not all change is progress
October 25, 2015
Direct download links: MP3 & Ogg
0:02:03 News
0:55:37 Over a Pint
With global PC sales in free-fall, is a move away from general purpose computing and towards a cloud-focused future inevitable? If our news stories this week are any guide, it’s certainly looking that way. Plus, we chew over whether an apologia for the state of the KDE shell might just point to some wider uncomfortable truths about the Linux desktop ecosystem.
0:02:03 News
OpenBSD marks 20 years with a little help from the Beatles
19 Years of KDE History: Step by Step (GNOME turned 18 back in August)
Happy 11th Birthday, Ubuntu! and happy 12th, CentOS
Mozilla Launches Open Source Support Program
Google kills Chrome notification center, figures you probably won’t care anyway
Chrome dumps “OK Google” eavesdropping extension because nobody actually uses it
Google Goes After Microsoft & IBM By Making Google Apps For Work Free While Customers Still Under Competitor’s Contract
Cloud Software Companies Still Growing, Still Not Profitable
5 Things We Learned About AWS from Amazon’s Q3 Earnings
Cringe-worthy “PC Does What?” campaign wants you to upgrade
Windows 10 isn’t necessarily driving a big PC refresh cycle, AMD CEO says
You know when you spill your drink but keep on dancing anyway? That’s totally Intel right now
Intel inks $8bn debt deal, preps for Altera buy
Handy results put ARM on track for a muscular year
Meizumart Is Closed, Meizu MX4 Ubuntu Edition No Longer Has a Home
Update on Ubuntu Phone security issue
Ubuntu Phones Will Run Any Linux Application on Top of Unity 8
Announcing the UbuCon Summit
Kubuntu Lead Has Stepped Down – But It Isn’t The End Of Kubuntu (we spoke to Jonathan back on show #43)
Solu: the Finnish pocket computer that wants to take over the world
The Story of MicroPython on the BBC micro:bit
Joe mentioned that he’s organised another podcaster meetup in London for early November.
And, finally, huge congratulations to Andy Mitchell, who was lucky enough to have his name picked out of the hat in our OSCON ticket competition. Enjoy the show, Andy!
0:55:37 Over a Pint
We wondered whether a blog post from Martin Gräßlin, blaming almost everyone but the KDE team for the issues users have been suffering with Plasma 5, might offer some broader insight into the failure of Linux to gain significant desktop market share.
Speaking of ARM curbstomping x86 in the server space, https://www.96boards.org/ is a joint effort between Linaro and the Linux Foundation to come up with a standardized Raspberry Pi-like form factor aimed at rack mounting in data centers. (Their objection to the Pi is that every release so far has changed the external connector layout, so you have to put ’em in a different case. If you want to bulk rackmount tiny ARM boards, you want a standardized “0.1U” form factor.)
You might want to interview either Jon Masters (Arm guy at Red Hat) or David Mandala (96boards guy at Linaro) or Kate Stewart of the Linux Foundation. All of ’em can tell you about that stuff.
Rob
(P.S. The reason “ACPI for ARM” exists is that Microsoft is giving companies like AMD a lot of money to work on that instead of Device Trees, because Azure needs it to run on ARM.)
(P.P.S. It’s the way the bootloader and OS determine what hardware is installed on a given machine during system boot. ARM hasn’t got a BIOS, so it needs a way to enumerate memory and busses and so on that are a chicken-and-egg problem to probe for. Device Tree is the data format Linux created for this (loosely based on Open Firmware from Solaris and PowerPC). ACPI is the one Intel created for the Itanic, because depending on 16-bit 8086 code to bring up an ia64 system was embarrassing even though it worked fine; then they backported it to x86 when the Itanic sank, because they’d spent so much money creating ACPI they couldn’t afford to stop or they’d have to write it off and find somebody to blame. Microsoft uses ACPI on x86 because Intel said so, and being Microsoft they’d much rather pay hardware vendors to support it on other platforms than write new code.)
Very cool link. I work in the HPC space, and x86 is definitely still king, but I could foresee ARM platforms getting a toehold in certain applications, especially when power is significant.
Heh, going way back to my interview you’ll remember I was pessimistic about Linux on the Desktop ever happening. (Three distinct failure modes, and preinstalls matter.)
But Linux on the smartphone is the common case today, via Android. It’s not a particularly traditional Linux, but it’s not much weirder than GoboLinux. Getting a terminal on Android is actually pretty straightforward: just install “Terminal Emulator” from the Play Store (I used Jack Palevich’s, which has 10 million downloads). This is built out of a default “Terminal” app in the Android Open Source Project, which sadly isn’t installed in the base image by default, but I’m hoping they include it in a future version.
When you interviewed me I was hoping to get Android to merge toybox, to make Android self-hosting (as described at http://landley.net/aboriginal/about.html#selfhost). Android merged toybox back in January (https://lwn.net/Articles/629362/), and since then the second most active contributor to toybox (after me) is Elliott Hughes, the Android Bionic and Toolbox maintainer. Android M includes toybox alongside toolbox, and we’re working to replace more of toolbox with toybox. (Elliott sends periodic patches to http://landley.net/toybox/roadmap.html#android to update the current status of this work.)
Android has decided to use vanilla unmodified toybox in its releases. (Android’s toybox git repository at https://android.googlesource.com/platform/external/toybox/ includes a lot of local commits, but they’re all to the android makefiles that build toybox as part of AOSP and selects which commands to enable and so on.) This means they push patches upstream to me so they can then pull and use them in Android.
Toybox’s 1.0 design goal includes enough commands to build Linux From Scratch (as described in http://landley.net/toybox/roadmap.html#dev_env and empirically tested in my Aboriginal Linux project via http://landley.net/aboriginal/control-images). Then I need to apply http://landley.net/aboriginal/about.html#hairball to AOSP and try to get the Android build itself running under Android.
(Note: Android releases are forked off of AOSP. The Android Open Source Project is roughly analogous to Fedora Rawhide: it does continuous development which is periodically frozen and forked to produce release versions. So changes that go into AOSP wind up in a future Android release. If I can build AOSP under Android, the project becomes self-hosting.)
Opening Android is a lot easier than getting preinstalls for Linux. Participating in Android development gives open source developers leverage and traction to improve code deployed to literally billions of users, and the fact that it’s deployed through multiple hardware vendors who make their own local modifications means there isn’t a single monopoly voice making unappealable decisions. (Some voices are definitely louder than others, but there’s at least the potential to work towards a whitebox hardware ecosystem with Android distros. Free software purists don’t see any difference between Android and iOS, but iOS is a single monopoly vendor selling hardware and a single software image directly; with Android there’s at least the potential for whitebox hardware or distros. There’s no leverage to improve upon the status quo of iOS from outside Apple.)
PC hardware didn’t start open, it started partially open and was forced the rest of the way open by whitebox manufacturers (most prominently Compaq cloning the BIOS and surviving a lawsuit from IBM). Android is similarly partially open, and can similarly benefit from ongoing work to increase both its openness and its usefulness to developers authoring rather than merely passively consuming content.
So that’s what I’m trying to do. And it’s why the failure of Linux on the Desktop doesn’t hugely bother me anymore.
I would not write off Linux on the Desktop. Price and, critically, PERFORMANCE issues with low-end PCs can make a big difference as consumers are squeezed. Recently I challenged myself to really USE my Windows partitions for everyday use: LibreOffice, Ruby (for scripting) and GnuCash were the apps I used, and I found Windows HORRIFIC.
Not only did things look remarkably uglier (Windows 8.1) compared to Xubuntu/XFCE with MacBuntu theming, but performance was even uglier. On an AMD 1.3 GHz dual-core machine with 4 GB of RAM (0.5 GB shared with the graphics card), Windows was slow. Very slow. But Xubuntu ran as it always did: very fast, and flawlessly.
People who don’t use Windows regularly forget how horrific updating can be. I literally had to use another computer while mine ground away downloading and installing updates. By contrast, even a kernel update on Xubuntu doesn’t take very long — minutes — and I can use my computer while it’s updating, which is impossible in Windows on budget machines.
Updating applications is even worse — you need to uninstall, then download the newer binary and reinstall, instead of apt-get upgrade. Command line? To do almost anything beyond very simple stuff, you must use the command line on Windows as well. Windows even has, like Thunar, an “open command line here” function. Invoking the Windows version of rsync, robocopy, for the very common use case of copying files to a backup location, has to be done from the command line. [Robocopy is available on most recent versions of Windows.] The Linux command line is both easier and simpler; the learning curve to set up and maintain your computer is IMHO far easier in Linux than Windows.
But performance and ease of use are likely to be critical for those buying cheap PCs, and are likely to drive vendors to either Chromebooks or straight Linux. The straight Linux option has the advantage, for vendors, of allowing preloaded crudware/adware, which is likely to be attractive to them. Already I see a lot of cheap PCs à la the HP Stream 13-inch version: ultra lightweight and portable. People are already buying the Stream and wiping out Windows to boot into Linux, which is a nicer solution for a computer with limited HD space than Chrome plus Linux, and solves the “can’t print to an ordinary printer” problem of Chrome. One thing Linux is excellent at is printer support: I can print to my ancient Apple LaserWriter 360 (using a USB/parallel adapter) very easily in Linux, and just as easily to my new Brother wireless laser printer.
For vendors like HP and Dell, offering very-budget machines with better performance than Windows can allow, while not butting heads with Google over crud/ad-ware, would seem to me very attractive. Add in that you can very easily make something like XFCE or LXDE look far more beautiful than Windows, with better font rendering and eye candy and superior performance on the exact same hardware, and you’ve got a winner.
I would be very surprised if Linux-specific budget machines did not show up next year; at the very least, hardware vendors want an alternative to Microsoft, which is now their direct competitor in hardware.
You mention Chromebooks, but Google just killed Chrome OS and folded it into Android: http://www.theverge.com/2015/10/29/9639950/google-combining-android-chromeos-report
By “show up next year”, do you mean the big splash when Dell started selling Ubuntu machines in 2007:
http://www.computerworld.com/article/2536617/operating-systems/a-year-later–sales-of-linux-on-dell-computers-continue-to-grow.html
And then again in 2012:
http://www.itworld.com/article/2724598/it-management/why-dell-is-selling-linux-again.html
And in theory they’re doing it again this year:
http://www.techrepublic.com/article/dell-is-back-in-bed-with-linux/
And yet today http://dell.com/linux redirects to a 404 error.
Yeah, most low-end desktop and laptop computers you would find at a store definitely do not have SSDs, which would be most of the performance gain. Although if you want to run virtualization, an upgrade for hardware enablement might be nice.
Regarding problems with OpenGL driver bugs, this has been our experience too with Blender development. We get continual bug reports regarding driver issues (relating to specific OS/driver/hardware combinations) we can rarely reproduce.
You mention use of new features as a possible cause. If you avoid OpenGL altogether, then yes – you can avoid all the driver bugs too (not using a compositing WM here). Assuming hardware acceleration is worth having, it’s quite hard to avoid bugs. In short: old features can be poorly supported by newer drivers (or implemented by the driver in software and slow), and newer features aren’t necessarily supported at all. Commercial applications tend to certify only a limited set of hardware because of this.
Regarding the current state of KDE and Plasma 5 from an ordinary ticked-off user’s perspective…
I want to know about the developers’ insistence that the move from KDE4 to Plasma 5 was a good one, promising so much. If I understand this correctly, some devs are now throwing their arms in the air trying to address issues that they suggest are not of their making?
Wow… who made these decisions? Did Intel and AMD suddenly go rogue and ditch Linux, leaving it in the lurch? Or is there really nobody steering the Linux ship?
You need only look back on the developer blogs from the corresponding period to see the enormous faith certain developers placed in Plasma 5 and the Qt toolkit. All I want to know is: when did this Plasma 5 hype in reality become less about progress and more about the untruths and covering up the mistake of committing to it? Or am I just plain wrong here, and this is just a miscalculation from one of the major Linux desktop players…
Would be best to get some comment from the KDE guys here.
To speak in their defence… it’s really hard to know (in advance) what problems you’ll run into when building on top of a new technology stack, especially one which is both a moving target and has multiple implementations.
If you use any hardware acceleration you rely on good driver support (OpenGL/OpenCL/CUDA… etc). For sure the KDE guys would have been aware that driver incompatibilities exist, but how to know the kinds of problems driver bugs are going to cause you… and at what scale? How would you know (ahead of time) if this is causing some instability for a single release… or continual headaches for years?
As far as I can see, the only thing you can do is look into similar projects to see what kind of compatibility issues they faced… and in the case of KDE, what would you compare it to? OSX and Windows both use hardware acceleration on the desktop, so the KDE guys were not exactly going out on a limb when they decided to do it too. However, OSX has tightly controlled OpenGL support (they’re involved in the driver development), and Microsoft defines both DirectX and the desktop environment.
… so, they took a risk by doing something new and not well tested on the Linux desktop, and the project suffers the consequences.
It’s a shame… you can place the blame on applications/driver-devs/distros… but at least appreciate it’s not _only_ the fault of KDE when drivers cause problems.
For the record – while Linux driver support may not always be as good – OpenGL driver bugs are not isolated to Linux; there are many driver bugs on all platforms.
To me it sounded like they don’t even bother to report the issue upstream to the devs of Xorg and the Linux kernel. Even then, I would be really surprised if Red Hat and Canonical don’t pay someone to backport fixes for graphics drivers.
At least there should be some HowTo they can point bug reporters to: “That’s a known issue, you’ll find the bug report at xorg.org/bugs/…. If it’s marked fixed but you still see this problem, open an issue at your distribution’s bug tracker.”
I know that’s work too, but then you can point everyone in the direction of the devs of the buggy code. Because if they don’t know there are a bunch of people actually seeing this bug, why should they fix it?
Agreed that Google Docs is really only good for simple spreadsheets, word docs, etc. I must also say that LibreOffice has the strength to pull off heavy-duty projects. At the same time, I do agree that most, if not all, in my field use MS Word and MS Excel. As for the future of PCs, there are plenty of things a PC can do that a tablet can’t – maybe that will change in the future. For I do remember when the desktop was a necessity and a laptop was just for light work. Great show!
You know the Linux ABI hasn’t changed since 1.0, right? There are people testing this with binaries built in, like, 1992.
Bryan Cantrill said in a talk that’s great, because all the cloud container stuff running on their SmartOS (based on Illumos/OpenSolaris) just needs to implement the Linux ABI to run all the Docker images (in Solaris zones, of course). It’s like universal binaries for the cloud.
Windows carries A LOT of technical debt. A NetApp guy once told us they can deduplicate 80% of the blocks containing the OS on Windows VMs, from Win 2000 to the then-current 2008r2. If you want newer TLS in your Windows-based application, you have to bundle your own or move your customers to a new version of Windows. Their approach to backwards compatibility seems somewhat like chrooting old services into a RHEL 4 userland to me.
If you want support for 10+ years, you have to pay someone: either licence fees or the folks maintaining your platform.
There’s a company where you can get support and fixes for FreeBSD 10 for the next 20 years. I’m sure there are still people running Solaris 9 code in branded zones on Oracle’s Solaris. If you’ve got some SCO server, there’s someone who will make sure it still works beyond 2020.
But people who do this kind of work as a labour of love are very rare, and also hindered by the restrictive licences of stuff like AT&T UNIX.
In the land of BSD-based appliances (NetApp, Juniper, Netflix, …), companies are moving to following -CURRENT so they don’t have to backport new drivers and features to a codebase branched off 8 years ago. This way they don’t have to deal with huge changes like moving from RHEL 6 to 7.
You don’t even need to base your product on the very latest version of FreeBSD. Stay with -STABLE (where releases are branched off from) and have your devs test on -CURRENT. That’s way easier than basing on RHEL/CentOS and testing on Fedora.
Wandered off into BSD-land again, didn’t I?
Well, maybe Canonical will solve all the issues with snappy packages™ and containerization of the Linux Desktop …
The content of this website, and that of the podcasts produced by the website owners, is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.