not all change is progress
June 13, 2016
Direct download links: MP3 & Ogg
01:06:18 Devil’s Advocate
01:18:32 Net Neutrality Feedback
Following a spin around the latest news stories and a rummage through our postbag, Paddy played the role of Devil’s Advocate to suggest that maybe some features typical of FOSS development result in lower code quality, and have led to a blind acceptance of that as the norm. We rounded off with your take on our recent Net Neutrality debate, which teased out some of the nuances we didn’t hit the first time around.
00:04:38 News
Announcing the ownCloud Foundation
We are Nextcloud – the future of private file sync and share
Nextcloud is the future of open source file sync and share
ownCloud Statement concerning the formation of Nextcloud by Frank Karlitschek
New versions of Firefox prepare for its biggest change
Get ready for Google’s proprietary Android. It’s coming – analyst
The app boom is over
Mobile Ad Blockers Have Reached Scary Proportions: The Wrecking Ball of the Free Internet
BBC Micro:bit computer now available to all for £13
Investigatory Powers Bill passes through Commons after Labour backs Tory spy law
These big-name laptops are infested with security bugs –
Tmux support arrives for Bash on Ubuntu on Windows
The Number Of Linux Games Has More Than Quadrupled In The Past Two Years
Seven months later, Valve’s Steam Machines look dead in the water
Mozilla will fund code audits for open source software
A huge thank you to Alternative Armies Southwest for joining the ranks of our Monthly Supporters, and to all of the existing members of this exclusive band. You guys keep the show solvent and on the rails.
Félim Whiteley got in touch to bemoan the horrors of the system update process on Windows, whilst Martyn and Will had some thoughts about our recent piece on Cryptomator.
Popey chipped in on the trustworthiness of crowdfunding platforms and, along with Keith Zubot-Gephart, praised the Android integration available with Pebble smartwatches. And Joe would again like to thank Paul Gleeson for gifting him one of the original Pebbles.
01:06:18 Devil’s Advocate
With seemingly never-ending incremental updates being built in the open, and contributions of varying quality, is it any wonder that sometimes FOSS projects don’t produce ideal code? Paddy is pissed at constantly hearing that “all code has bugs”, and wondered if our development processes haven’t contributed to normalising this phrase as an unassailable statement of fact, rather than being a state of affairs to regret.
01:18:32 Net Neutrality Feedback
Boy, did we get a lot of feedback after our recent discussion about Net Neutrality. Apologies if you didn’t get a shout-out, but there was only so much that we could cover. Thanks to Stephen, Eric, Dridi Boukelmoune, Félim and Will for your thoughts, and to everybody else who contributed.
Splitting up Firefox’s GUI and everything behind it is just a first, necessary step. They first have to get the IPC between GUI and rendering engine right before they can have several instances of rendering engines. When just splitting GUI and backend can double the RAM usage for some users, there is clearly some optimization to do, or some formerly shared state to remove.
You could also limit the impact of a crashing tab by just having a worker process for, let’s say, every five tabs. Or start with a process for every window.
Yes, this is true. I have followed the progress on multi-process Firefox and the developers do talk about adding more processes in the future. There could be separate processes for add-ons or graphical rendering and possibly multiple processes for tabs (though likely not one per tab like Chrome).
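The “worker process for every five tabs” idea above can be sketched as a simple mapping from tabs to process groups. This is a toy illustration of the trade-off, not Firefox’s actual scheme:

```python
# Toy sketch of "one worker process per N tabs": fewer processes than
# one-per-tab (saving RAM), but a crash only takes down one group of tabs.

def assign_worker(tab_index, tabs_per_process=5):
    """Map a tab to a worker-process group: tabs 0-4 -> worker 0, 5-9 -> worker 1, ..."""
    return tab_index // tabs_per_process

def tabs_lost_on_crash(crashed_tab, open_tabs, tabs_per_process=5):
    """List the tabs that share a worker with the crashed tab."""
    group = assign_worker(crashed_tab, tabs_per_process)
    return [t for t in range(open_tabs)
            if assign_worker(t, tabs_per_process) == group]

# With 12 open tabs and a crash in tab 7, only tabs 5-9 are lost,
# rather than the whole browser.
print(tabs_lost_on_crash(7, 12))  # -> [5, 6, 7, 8, 9]
```

Raising `tabs_per_process` trades crash isolation for memory; setting it to 1 gives the Chrome-style process-per-tab model mentioned above.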
Mozilla has been working on multiprocess Firefox for at least five years so add-on developers have had time to adapt to it. It is disappointing that something big like NoScript doesn’t work completely (my understanding is that it does somewhat work; it can block scripts but some of the other features don’t work). The timing is not very good for add-on developers. With XUL deprecation already announced, developers know that they have to port their add-ons to the WebExtensions format but the API is not full featured yet. If I had a complex add-on (I do have two simple XUL add-ons but they were easy to port), I would be hoping to hold out for a way to port straight to WebExtensions to avoid porting twice.
Joe didn’t sound too convincing arguing why Google can’t make Android proprietary, but I hope he is right.
Regarding bugs, I’d be curious to know what Paddy thinks of agile development. The word agile has been overused to the point where it doesn’t mean much but it still carries weight with some people (where I work there is no way we could get a dedicated conference room for our team, but we somehow were able to get one by casting the proposal as an “agile team room” rather than a conference room). In the current software climate, first mover’s advantage is important, so many developers emphasize creating a viable product over a polished one.
Still, allowing developers to get rid of longstanding bugs and technical debt might shorten the release cycle. If you’re able to get the corner cases mapped into automated tests you probably also improve the code coverage of your tests a lot. But, of course, testing GUIs is hard to implement and there have to be tests to begin with…
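Mapping fixed corner cases into automated tests can be as simple as pinning each past bug with an assertion. A generic sketch (the function and the bugs are made up for illustration):

```python
# Hypothetical example: a small parser that once crashed on empty input
# and choked on surrounding whitespace. Each fixed corner case becomes a
# permanent regression test, so the bug can't silently return.

def parse_version(text):
    """Parse 'X.Y.Z' into a tuple of ints; tolerate whitespace and empty input."""
    text = text.strip()
    if not text:
        return (0, 0, 0)  # regression fix: used to raise IndexError
    return tuple(int(part) for part in text.split("."))

# One assertion per previously reported bug:
assert parse_version("") == (0, 0, 0)        # bug #1: empty input crashed
assert parse_version(" 1.2.3\n") == (1, 2, 3)  # bug #2: whitespace not stripped
assert parse_version("10.04") == (10, 4)      # bug #3: leading zeros mishandled
```

Over time the assertion list doubles as documentation of the corner cases the code is known to handle, which is exactly the coverage improvement described above.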
Still and again I recommend “The Phoenix Project”. I’m especially curious what Paddy thinks of agile & devops before and after he reads the book. A Luddite(‘s|s’) Book Review, so to say.
It’s very entertaining and I bet everyone in IT will recognize some problems they’ve faced themselves before ;)
As a user I find software bugs quite frustrating, and I often wish developers would spend more time on stability than on new shiny features.
Something that was missing from the discussion was any distinction between kinds of bugs. I have the impression you were focusing on bugs caused by mistakes in code (the kind caused by sloppy practice, which could have been avoided by taking more care).
For small, single purpose libraries, I generally agree with your comments.
However things become more complicated for larger applications, where issues are often caused by interactions between moving parts.
Many bugs aren’t caused by simple mistakes, and may be caused by…
– Non-standard configurations (combinations of
– Bugs in other peoples code/libraries.
– Issues with hardware/drivers the developers can’t reproduce.
– Functionality which is working as intended, but performs poorly in some conditions.
– Harmless bugs (glitches or quirks) which don’t prevent people from using the software.
While in an ideal world all bugs should be fixed…
In practice, when only a _very_ small number of users experience an issue, or it’s an obscure corner case, I think it can be reasonable to define it as low priority and continue with other development (assuming it isn’t a security issue).
This can be a social issue too: users can be very pushy, demanding you *fix* some behavior they see as a bug, even when it’s disputable that the change they want is even an improvement.
(This contributes to the shoulder shrugging in my case).
Everyone can make mistakes, so it’s a bit unfair to point at single developers who made a simple error. If we really want stability we need teams of people to be responsible for code, review each other’s work, etc.
There will still be bugs, but at least then it’s not an oversight from a single person.
There’s a FLOSS alternative to Slack which is close to feature parity: mattermost.org
You can run your own server and hook it into things like GitLab, Nagios and Trac.
It seems to have @-mentions and hashtags, and has support for Markdown.
Haven’t tried it yet (and, btw, never touched slack), but we’ll soon have a mattermost server at work so there’s a bit more educated rambling to come ;)
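As an aside, hooking things into Mattermost like the commenter describes is mostly a matter of posting JSON to an incoming webhook. A minimal sketch using only the standard library (the webhook URL is a placeholder; you get a real one when you create an Incoming Webhook integration on your own server):

```python
# Post a message to a Mattermost incoming webhook. The payload is plain
# JSON; "text" is required, "channel" is optional.
import json
import urllib.request

def build_payload(text, channel=None):
    """Build the JSON body Mattermost incoming webhooks expect."""
    payload = {"text": text}
    if channel:
        payload["channel"] = channel
    return json.dumps(payload).encode("utf-8")

def notify(webhook_url, text, channel=None):
    """Send the message; raises on a non-2xx response."""
    req = urllib.request.Request(
        webhook_url,
        data=build_payload(text, channel),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Example (placeholder URL):
# notify("https://mattermost.example.com/hooks/xxx", "Build #42 failed", "town-square")
```

This is the same mechanism tools like GitLab, Nagios and Trac use for their Mattermost integrations, just wired up for you.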
I work on proprietary software for a particular industry and it’s all much worse than FOSS. We do not do anything unless there is money in it. Reputation, security be damned, we have no standards, minimal QA, cash determines the direction of the software.
We usually just develop the worst 3 day vaporware for prospective clients who are interested in some feature, then we advertise it with full page adverts in industry rags!
We use an abandoned compiler and an old database, simply because there is no money in refactoring or fixing anything. Developers are not allowed to work on any feature that would cost QA dollars. There is a committee that determines what goes into a sprint – so I can’t fix obviously broken stuff; it has to just sit there.
We are the leaders in our market segment, so go figure.
More recently I avoid generalizing about open/closed source, since open source can range from anything thrown up on GitHub to the Linux kernel, and closed source can be anything from a one-man show to a large company.
You could also split software by whether it does/doesn’t use DVCS/agile/pair-programming/IDEs etc. and make just as vague and unverifiable statements.
Nevertheless, I’ve heard similar horror stories and anecdotes about successful closed software projects (running on many games consoles, for example), and about OS kernel code and popular game engines.
A while back I talked at length on the topic with a developer who worked on popular closed source software, about best practices and coding-standards… he noted that he found the standards much more loose where he worked compared with his experience with open-source.
– Open-source contributions have mixed quality (that includes a *lot* of low-quality patches), so you need to be more strict about what kind of code is accepted.
– Commit access needs to be earned (not something you’re given as soon as you’re hired).
– With a company that hires talented developers there is the assumption that you know what you’re doing, so once you’re working for them, there is less scrutiny over your code.
– On the individual level, the knowledge that everything you do is public makes some difference too. Since it points back to you (even if you change jobs or relocate), there is some professional self-interest in not being associated with horrible code.
Hi, a clarification for one of the points raised on the show, which I think was misunderstood…
Regarding “Open-source contributions have mixed quality”:
By this I mean, if you’re a popular open-source project, you will get _many_ drive-by patch submissions from developers you don’t know, where the maintainers need to evaluate whether the patch can be accepted into master.
In this case you need to check it carefully and *can’t* assume their code meets your standards.
Because of this, it makes sense to document what you _do_ expect, so you can point patch authors to the documentation that explains why their patch doesn’t meet your standards.
What I’m getting at is that, typically, a submitted patch will not be accepted on its first revision; it’s normal to go through some review process, because the alternative is to accept mixed-quality code and resolve the technical debt later.
Which is not *necessarily* bad; Pieter Hintjens (ZeroMQ author) makes a compelling argument for this in an interview he did: http://5by5.tv/changelog/205 ~ as long as the community is able to stabilize code for releases, I guess it’s up to each project how strict they are about patch submission.
Of course developers of closed-source software may do internal code reviews, and there is a movement towards *internal* open source ~ this is why it’s hard to generalize.
You should check out “The Phoenix Project”. The entertaining story talks a lot about getting rid of technical debt and improving how everyone in IT works. It’s pretty devopsy, but you might be able to plant some seeds by handing or recommending it to the more receptive colleagues after you’ve read it yourself ;)
Comments are now closed.
The content of this website, and that of the podcasts produced by the website owners, is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.