not all change is progress
June 9, 2014
Direct download links: MP3 & Ogg
openmamba GNU/Linux – UNIX-Haters Handbook, 20 years on – NixOS, GoboLinuxNews
Is Apple aping Open Source with search and client-side decorations?
Steam Machines Update
Survey: Video gamers more social and more socially conscious
Ubuntu MATE Flavor Could Arrive Soon, Prototype Looks Great Already
True Goodbye: ‘Using TrueCrypt Is Not Secure’
Exclusive: Security enthusiasts may revive encryption tool after mystery shutdown
Weighing up the impact of Edward Snowden
The Post-Snowden Surveillance World: Network Effects, Low Marginal Costs, And Technical Lock-in
China state media calls for ‘severe punishment’ for US tech firms
Demand real surveillance reform
Reset the Net
Google announces alpha of End-to-End
Samsung Launches First Tizen Phone And It Is A Beast
GnuTLS Patches Critical Remote Code Execution Bug
Patch NOW: Six new bugs found in OpenSSL – including spying hole
The Linux Foundation’s Core Infrastructure Initiative Announces New Backers, First Projects to Receive Support and Advisory Board Members
Thanks for nothing, OpenSSL, grumbles stonewalled De Raadt
Paddy took a look at openmamba GNU/Linux, and Joe will be reporting back on Puppy Linux next time.
A huge thank you to Mohan Paul for the PayPal donation, and to johanv and several anonymous Flattrers.
Love Linux and enjoy talking about it? Want to be an occasional or regular voice on the show? Email us a two minute or so audio clip explaining a bit about your background, and what you think you could bring to our humble podcast.
We kicked off the feedback proper with Joe reporting back that sadly, no, VLC isn’t the cause of his screen blanking issues with the most recent version of Xubuntu.
Marek Miczyk had some kind words for us, ed asked us to stay away from neologisms, and Brian36 was pleased to hear that it’s not just him having issues with Skype and PulseAudio.
Rich B has been having difficulties with MTP and a Samsung Galaxy 5. We talked a little about AirDroid and SuperBeam, but other suggestions from listeners would be welcome.
Spacebat inadvertently trailed our look at NixOS and GoboLinux by writing us about transactional package management.
Nathan D. Smith gently took us to task over our use of the term ‘Luddite’, whilst Charlie Ogier continued trying to help Joe in his quest for a decent wireless and trackpadded keyboard.
Cathryne attempted to garner more feedback on her previous question about funding and influencing development, to which Campbell Barton responded. But if there are other developers listening, we’d love to hear your thoughts on this topic too.
Andy Jesse had a close shave on the disaster recovery front, which prompted a question about what good disk/file recovery tools exist for Linux systems. Do let us know what your favourites are, and we’ll provide a review and round-up on a future show.
Thinking about LXQt and Android ‘intents’, Félim Whiteley pointed out that KDE already supports custom Service Menus. We’ll have to see how LXQt takes this forward, but Paddy’s initial impressions are that what they’re looking at is something rather more powerful than what KDE offers.
Jezra and SonOfNed both offered thoughts on the Mozilla DRM kerfuffle, and Shay the Daft Punk, Mark and Mikael had things to say either directly about Canonical/Ubuntu, or our attempts to provide an opinionated, but fair, coverage of the Orange Beast.
The UNIX-Haters Handbook, 20 Years On
With the help of some thoughts from FriedEggs and SonOfNed – thanks, guys – we chewed the fat for a little while, reflecting on how much has really changed in the Unix/Linux world over the last 20 years.
This was a really awkward segment to record, as there’s so much in the book to talk about that we could easily have given over an entire episode to it. So we were left with a tough choice: devote a whole show to this topic alone, or take a fairly superficial look at just a few different areas. Since the former risked appealing to only a small section of our audience, we opted for the latter. As ever, though, we’d strongly recommend listeners read the book (3.5MB PDF) themselves: for the humour, the history, and to help put where we are today into some context.
NixOS and GoboLinux
If the UHH were written today, it’s likely that it would have a chapter dedicated to the state of Unix/Linux package management. We decided to have a look at NixOS and GoboLinux, both of which hold out the promise of a different, and potentially more effective, approach to this thorny topic. But do they deliver…?
Hi Luddites, I’ve been listening since episode 1 and couldn’t believe my ears when you mentioned the Ubuntu MATE Remix I’ve been working on with Popey. I thought I’d address some of what you mentioned.
My interview on UUPC was pre-recorded some days before it was published. Up until the day of recording, Ubuntu MATE Remix was not something I had considered; after all, I’m an Arch Linux TU. I have since learnt that others have started similar MATE-on-Ubuntu projects in the past. I’ve contacted each of them and invited them to join our effort. So far one project has joined ours.
Following the UUPC interview Popey did some tinkering, just enough to get my attention. We started prototyping against 14.04 last Thursday and are now building against 14.10.
As I said, I’m an Arch Linux TU. I don’t like bloated systems. I’ve done some work to trim down the resource requirements and now have a full Ubuntu MATE Remix 14.10 running in ~120MB of RAM. This is on par with what I see on Arch Linux.
It was my idea to use the existing Ubuntu themes and artwork and I’ve applied some fixes to get them working nicely with MATE. You can see a newer screenshot below.
Currently I’m trying to figure out what needs to happen to get Ubuntu MATE Remix recognised as an official flavour and what the TO DO list is. I’m hopeful that we’ll have some kind of release for 14.10.
Keep up the great work, I think just the two of you works well.
Many thanks for the comments, and I do honestly wish your efforts with ‘Mubuntu’ well – I can see it being a massively popular offering. We’ll definitely be keeping an eye on that recently registered domain of yours for updates, and it would be good to have you on for a chat about the project when things are a little further progressed.
I’d be delighted to have a chat about MATE and Ubuntu MATE Remix at a later date. I’ve been advised to avoid the “Mubuntu” name for the time being, so this project is (for now at least) Ubuntu MATE Remix.
I’d also appreciate it if you and Joe could do a “first impressions” or review of some kind for Ubuntu MATE Remix when we have a beta or RC. Let me know if that’s something you could accommodate?
Your thoughts on distro organisation and package selection are largely in line with my own and I had your past “first impressions” in mind when I made the MATE LiveCD for a FOSDEM 2014 presentation earlier this year.
I’ll be hacking on the new site over the coming evenings. It will be a little thin on content in these early days but I’ll try and post some progress reports in a blog.
Martin – we’d be happy to have a look when you’re at the RC stage; betas can sometimes be quite misleading! Although we’ll be keeping an eye on your website, we’d appreciate a mail to show@ when you get there, and we should be able to organise an interview and review at pretty short notice (and having no TZ issues will help matters) – thanks.
Agreed, testing an RC would be best. Right now Ubuntu MATE Remix is in a state that only the maintainer could love ;-) I’ll email show@ when we’ve got something reviewable. I’m in the UK so there shouldn’t be any TZ issues. The website should be more useful in about a week.
I’m not so sure about the whole hobbyist track Joe went on. Like Paddy mentioned, most people don’t even know they’re using Linux. As far as The Year of the Linux Desktop goes, it depends entirely on what you make it. Some of the distros are by tinkerers for tinkerers, but I’d be very surprised if, for example, the city of Munich employs naught but tinkerers.
And besides, even early Microsoft saw what capturing hobbyists can do with their open letter, and we all know how that turned out. In this respect I agree with Rob Landley that we don’t have top-down BDFLs dictating the UX/GUI, and that committees produce beige if you’re lucky, Unity if you’re not (paraphrased).
Note that having somebody driving is necessary but not sufficient for good aesthetics. Plenty of individual visions turn out a bit pants, especially as time goes on.
Ubuntu started out as what Mark Shuttleworth wanted a distro to be. In 2004 this was a glass of water in a desert. By 2008 it was showing cracks. By 2012 it no longer made much sense.
Not quite the full George Lucas, but you get my point. :)
Indeed, to have a nice UX you also need A/B testing. Even then, people have tested Unity and found it lacking, and this didn’t deter Canonical. A/B testing alone isn’t enough either; you have to take the results to heart even if they go against your vision. Shark-jumping tends to happen otherwise: scrollbar overlays in Ubuntu, buttons gratuitously moving to the left, Mir NIH, Unity, etc.
Not that it’s just Canonical taking a leap before building the bridge or installing the trampoline. Looking at you there, systemd. Every time you look at it, it’s subsumed another bit of common infrastructure which you used to be able to swap out at will. I find this slightly problematic, not least because of how long it took for PulseAudio to stabilise.
There’s value-add, which put Ubuntu on the map in the early days, and there’s change for change’s sake. So I agree wholeheartedly. Before going whole hog, you might want to ask what people think of your bacon from time to time – and take it to heart.
As far as full Lucas goes: he did leave us much in the way of computer-graphics inventions, through Blinn et al. Even if Canonical croaks, they have already shown that a Linux installer doesn’t have to be scary. That a good deal of aerial acrobatics involving dangerous marine wildlife followed later in no way tarnishes that legacy, even if Ubuntu takes more after Jar Jar Binks than Obi-Wan Kenobi these days.
Btw, Paddy, can you add a link in the show notes to that Debian-list NixOS takedown? I’d be very interested in reading it.
This dates back to 2008 (and Nix has evolved a little since then). The thread starts here: but the most interesting comment is this one:
Thank you kindly, fellow Luddite.
Having caught up to the latest episode now (started the day before yesterday), I’ve sent a donation your way (a quid per episode, which I think is fair). Keep up the good work, guys. (No need to publish this comment, just wanted to say thanks for the show in general and the links just now in particular).
Thanks, Jeroen – that’s very decent of you.
I wanted to stand up and say that I am at least one person who has changed his habits as a result of the Snowden revelations. I used to be a Chromebook-toting, Android-wielding Google fanboy. Linux was just what I used to launch Google Chrome. I’ve since deleted both my Google accounts. I’ve taught myself how to build an Ubuntu server with OpenSSL, OpenVPN, Squid, and ownCloud. I’ve switched to Firefox, and I host my own Weave server. I’ve neutered my Android phone and tablet so that each runs without a Google account–though I still have no illusion that Google isn’t collecting my data regardless. My Chromebook now runs Linux Mint 13 Xfce.
Though I’m not quite neckbeardy, I am definitely more nerdy than the average tech user. It would have been great if the Snowden revelations had inspired more change in the general populace, but the changes I made in my lifestyle weren’t easy, and I did have to give up a fair amount of convenience to achieve decent security. The average person who doesn’t know the difference between RAM and ROM won’t even begin to try and build his own server or put DD-WRT on his router.
The hurdle of convenience won’t be overcome until these secure products are made better, and made more user-friendly. That won’t happen until they pick up adoption and general interest. The best way to achieve that is for those who can, and those who care, to pick up these tools and make use of them. The more they are used, the more interest will be generated, and the more resources will be funneled to the developers. Alternatives won’t be found until we start looking for them.
That said, if you’re ever looking for topics, check out some of the projects featured over at http://www.redecentralize.org . Also, a good “starting kit” can be found at https://citizenweb.is/guide/.
Bitmessage key: BM-NBpoGQiAfM2NzHgQoCUY7snwDQ9C6L2A
Danny, I am glad to hear that you too have taken the Snowden revelations to heart. You have made an impressive and inspiring effort to mitigate the incredible threats posed by massive surveillance to democratic societies.
Vigilance and inconvenience are prices that we must be willing to pay if there is any hope of retaining civil liberties in the coming years. Thanks for standing up to be counted!
… And then later in the podcast…
Wait. What? A third luddite? Might I remind you that not all change is progress.
Eh, we just hold change up to a higher standard and make it prove its worth.
I’ve noticed a corollary to Moore’s Law: that 50% of what you know about computers is obsolete every 18 months. The great thing about Unix derivatives is it’s mostly the same 50% cycling out over and over.
In that the userspace api is relatively stable, but we keep hacking on memory management and scheduling and all that every year because the hardware changes around us? That’s how I think of it.
I really like the two-man format. Maybe bring in a 3rd for a segment from time to time? Or maybe have a rotating guest host? Not sure how much of a logistical nightmare that would be.
Why does everybody want to blame C for the recent string of crypto vulnerabilities? The problem is that for years cryptography was black magic only experts were allowed to touch, because anybody else would introduce subtle timing attacks and side-channel information leaks à la:
Because of this, cryptographic code was not subject to the same scrutiny as other open source software. It never got reviewed because meddling non-cryptographers would be mocked for practicing cryptography without the proper initiation and knowing all the secret handshakes. Look at the horrible state of cooperation between Debian and OpenSSL that led to the 2008 key generation bug:
Post-Snowden, non-cryptographers broke down and started reviewing crypto _anyway_, looking for the flaws the NSA claimed to be exploiting. It didn’t matter whether we’d be mocked for messing about in the big boy pool, we knew we weren’t secure and wanted to figure out _how_. And once we lifted up this rock, we found all sorts of problems.
But what did we find? The iOS “goto fail” bug was a repeated line of code, not a buffer overflow blameable on C. The first gnutls bug was failure to set a return code (http://blog.existentialize.com/the-story-of-the-gnutls-bug.html). The heartbleed bug was magnified by the fact the developers wrote their own allocation routines because they didn’t consider the standard ones fast or portable enough; you think they wouldn’t pull similar “not invented here” stunts in another language?
The current round of OpenSSL bugs does include a buffer overrun, as bug #3 on a six-bug list. The others include a man-in-the-middle attack (bad protocol, not bad programming), infinite recursion (not something Python will be much happier about), a null pointer dereference (Java would also throw an exception that’d crash the server if not caught; you could set up a similar signal handler in C and re-exec /proc/self/exe, but nobody ever bothers), a race condition, and an unspecified “denial of service attack”.
There’s an old saying, “C combines the flexibility of assembly language with the power of assembly language.” The beauty of the language is there’s minimal abstraction between the programmer and what the machine is actually doing. Implementing cryptography in higher level languages is bound to have the same kind of side-channel attacks (from timing to differential power consumption) that cryptographers have been warning against for ages, _and_ it won’t prevent the majority of the bugs in this sort of code.
Another old saying is that there has never been and never will be a language that makes it the least bit difficult to write bad programs, which remains true to this day.
Interesting comment, on a related note…
While I don’t want to harsh on the OpenBSD guys, I wonder what you think of their take on security.
AFAICS they fixate a bit on simple bugs (buffer overruns and int overflows, e.g.). In some cases it’s reasonable to be paranoid – especially when there is some risk of bad input. But I have the impression they identify errors that nobody could ever reproduce, replace `int` with `size_t` or `strcpy` with `strlcpy`, then tell everyone how many bugs they fixed… (OK, that’s cynical, but you see my point?)
Maybe they fix other, more involved bugs too; it’s just that when I hear OpenBSD devs talk of fixes they often mention these kinds of simple bugs (which makes me think they are just defining bugs as any code which doesn’t conform to their strict guidelines).
Another thing: replacing the allocator is fairly common practice (though perhaps in OpenSSL’s case it was ill-conceived); I’m not sure I’d consider this NIH.
– Python has its own allocator.
– Firefox has its own allocator for pools and arenas.
– Samba has talloc.
… there are many more, I’m sure you’re aware of that…
I was surprised to see OpenSSL get criticized so much for this, though with security related code it makes sense to have more rigid conventions too.
Campbell – the way folks in the OpenBSD milieu tend to sell their approach is to talk about a ratchet effect; by making small (and sometimes large) changes in a real, used, production system, they hope to incrementally drag others along with them, where the direction of travel is always towards a more secure OS and set of userland apps. In some areas (e.g. package management) they arguably lag other *nixen badly, but I guess that this is just a function of the devs only working on whatever interests them, and not seeking to appeal to a mass audience.
I’ve linked to this talk by Michael W. Lucas in some previous show notes, and it is worth watching if you haven’t already. Video start timed to cut off the pre-talk cruft:
And here’s a much shorter talking-head piece with de Raadt, which reinforces some of the points that Lucas makes:
Patrick – good point & definitely appreciate the work they do at this level (64-bit time, OpenSSH, other kernel-level stuff…).
My comments were more relating to their pedantic conventions. Say, for example, I take an application and replace every `strcpy` with `strlcpy`. (Note: I use `strlcpy` myself and generally consider it a good thing, but let’s ignore that for a second.)
Now I can pat myself on the back for hardening the application against buffer overruns. But if you never managed to re-create any of these bugs, who’s to say anything is fixed? Maybe only copying part of a string causes a crash elsewhere (a lookup for some ID returns NULL, say), or maybe the logic is flawed in other areas which you missed because you were too busy concerning yourself with hypothetical bugs which never occur.
It’s an odd argument for me to make, because I’m generally all for having strict code standards, but you want to understand the code at a deeper level too, otherwise you miss errors in the code’s logic (as Rob Landley pointed out).
They aren’t mutually exclusive though.
Not all classes of bugs are equal.
As a class, buffer over-run bugs in code that runs with elevated privileges have been common to many of the most severe security exploits we have seen to date. These bugs can be exploited to 1) read process memory (e.g. Heartbleed) or 2) branch into malicious code (while running with elevated privileges) by manipulating call return pointers in memory stack frames.
Probably, out of the aggregate total, most individual buffer over-run bugs have not yet been exploited in such ways. But the fact that so many of the most severe exploits have been based on buffer over-run bugs argues strongly for trying to mitigate the potential damage when those exploits do occur.
The fact that so much OS and application code broke when OpenBSD first pioneered their memory-exploit mitigations 10 years ago was not comforting, given what it implied about how common buffer overrun bugs may actually be in the *nix code base. I was glad, though, to hear Theo de Raadt mention (in the video Paddy linked to above) that most of that has been cleaned up in OpenBSD over the years.
I must confess that I was shocked to hear Theo say in the video that Microsoft had implemented most all of the OpenBSD memory mitigations, that the Linux kernel had some (but most were disabled by default) and that FreeBSD had none! That ruBSD conference was in Dec 2013, so that presentation wasn’t that long ago.
While I agree with most of what you say, I feel compelled to repeat an old joke. :)
It’s easy to shoot yourself in the foot when writing C.
With C++ you can blow whole limbs off.
In Pascal the cartridge won’t fit in the gun because it’s the wrong type.
In Visual Basic it’s like shooting yourself in the foot with a water pistol. In large projects the whole lower half of the body may become waterlogged.
If you want to get out of the mess that is MTP, I would recommend looking at rsync backup for Android:
I use it to sync my photos (and other files) from my phone to my desktop, where it’s obviously much easier to manage them. It integrates with Tasker as well so you can trigger individual sync profiles (one- or two-way) when desired. It takes a moment to set up, being very configurable and even allowing you to manually set rsync switches, but you’ll save tons of time vs MTP since you only have to care about it once.
I was going to suggest a similar thing to Joe. I know it is proprietary software, but BTSync works really well for that purpose:
Set her phone to sync her photo folder to her desktop, and every photo that she takes will be automatically transferred via WiFi.
The setup is easy to do. You can do it in a few minutes and forget about it. The only drawback is that he will have to inform her about the behavior of BTSync (for instance, if she removes a photo on her phone it will be removed on her computer, etc.). If he desires, he could go one step further and make his photos sync to the same folder. That way, every photo that he takes will be automatically transferred to her phone and desktop, but her photos will also be transferred to his phone.
Thanks for the tip but as aroid pointed out, there are potential dangers to using syncing so it’s not something I do. I prefer to copy things manually and stay in complete control.
Well, since syncing often includes performing a checksum of the synced files, I dare say it’s safer than regular copying. With simple copying, you may not notice if a transfer was interrupted, so you may end up removing things from your phone only to find out you’ve only got corrupted files left on the desktop/server/NAS.
My one-way photo/video transfer job has been very useful for this. I always run it manually an extra time as a precaution if I need to free some space on my phone. Restarting a sync task when there’s nothing to report takes a short time, verifies all destination files match the source and reports that nothing needed to be done. With rsync as the backend, you can decide whether to just check file sizes or checksums, and BTSync always does at least a checksum.
Syncing is also wireless so all I have to do is to be within my WiFi range for it to work (haven’t tried it over a VPN, but that should work too).
For copying files between phones/tablets and the PC using FTP, I use a little Android app named eShare: (with all the caveats Henrik points out.)
The configuration is minimal and its use is really practical.
By using a fixed IP on the phone or a reserved IP for it on the router, I can add a bookmark in Nemo or
In order to use it, I just have to start the app on the phone, tap one button to enable WiFi and another one to start sharing, then go to my file manager on the PC and use the bookmark to directly access the FTP folder on the phone. I can get transfer speeds of around 1.5MB/s, enough for small and medium-sized files.
Closing the app is as easy as tapping the same buttons again and then exiting the application.
No cables needed… Hope this can help, Joe!
Hi, Joe and Paddy,
As always, I’m thoroughly enjoying the show; thanks so much for everything you put into it. I want to take a few minutes to post some thoughts on a few topics…
#1 Podcast contents. A few shows back, you asked for feedback about the contents of the podcast, and if you should branch out into other topics. I am extremely happy with the topics that you cover and the pace with which you cover them. I can’t stand to listen to many of the other Linux podcasts (won’t mention names), because they ramble on endlessly about “nothing and everything all at once”. I greatly appreciate the balance you strike between being conversational and staying focused while moving things along. Keep it as-is, please!
#2 Unity. I used Unity for many months, during 13.04 and 13.10, and there is one feature which is absolutely amazing but curiously seems to go unnoticed. The HUD (integrated menu search) is so very, very useful, it amazes me that no one else has done it. It is extremely useful for two reasons in particular:
1. General helpfulness, especially for less-frequently used or new programs.
2. Interchangeability between LibreOffice and MS Office. I have to use MS Office at work, and the Ribbon interface has become second nature to me. When I sit at my home desktop, it takes me forever to find things using the old menu navigation. If Xfce used Unity’s HUD it would be simply spectacular. For a demonstration of this, here is a video:
https://www.youtube.com/watch?v=SZU9XzJBgVc&t=3m43s (3:43 mark)
#3. KDE. I tried out openSUSE for about 5 minutes at a Linux convention recently and I was very impressed. I am definitely thinking about giving it a proper install to play around with it, despite your mutual hatred. :-)
#4. Luddites. I can appreciate that you know what you like and you stick with it. However, there have been times when I have felt like you might have been borderline closed-minded. For example, the recent comments about opening up Unity/KDE (can’t remember which) and then promptly closing it – that’s not quite “trying out the latest free and open source software”, that’s more like dismissing it at first glance. I humbly suggest that there is a healthy balance somewhere between:
a) the idea that it is not good to have new things just for the sake of being new/different, and
b) seemingly not liking something simply because it is new/different.
For example: not liking Firefox 29 because you can’t get it just the way you like it while simultaneously disliking KDE because there are too many customization options. Is there no pleasing a luddite? :-)
Anyway, you guys are awesome, and I am excited to hear every show. Then, when it is over, I wish I could keep hearing more. Thanks again!
One feature of KDE that I rarely see espoused is its resemblance to Windows Vista. No, seriously, stop laughing for a moment. I have Linux Mint 13 KDE on my work desktop. It handles multiple monitors decently, better than most desktops. My work desktop has enough resources to handle the bells and whistles–which can be turned off. I like the configurability of KDE because I want a desktop that I can configure to do exactly what I want. But the thing that I really appreciate–and this is definitely not the left-handed compliment it sounds like–is that KDE reminds me of Windows Vista, which I associate with work. That faux-Aero desktop makes me feel like I should be working.
When not working, I use Xfce and fluxbox because KDE isn’t something I’d toss on just any computer. KDE is a nice tool to wield in some circumstances. I’m glad it is an option. I also have to applaud the KDE dev team for trying new things, like KDE Connect. By all means, if you have the hardware, I’d recommend taking the time to configure it so you can really get a sense of what it is capable of, and then giving it an honest try.
I have mixed feelings about OpenMamba and other software presenting the GPL as if it were a EULA. On the one hand, I appreciate that it informs the user of the license, and (if they read it), the rights they are entitled to upon receipt (basically the four freedoms). My problem is that by asking the user to accept those terms, it is implied that this is a necessary condition for the use of the software. However, by my understanding of the GPL, its terms only require assent by the end user when they redistribute (hence the traditional “COPYING” name for the file containing the license). If the user is not going to redistribute, and simply going to use the software, no acceptance is required. So maybe just an informative “This is GPLd” without requiring an “I accept” button would be best.
Still loving the show and your down-to-earth approach to everything. I’m halfway through episode 17 and can’t wait to get to work tomorrow to hear the other half.
Picking up on what you were saying about transferring files from/to Android devices. I’ve had problems in the past and used gMTP which seemed to work ok on some devices. However recently I’ve been using Linux Mint with Cinnamon, and Nemo does it all for me. I have a rooted Samsung Tab2 and a stock Motorola G and Nemo sees them as devices with internal storage and allows direct file transfer by cable.
A third person would lighten your load, but it might be difficult to keep the chemistry. He would have to be another luddite, though. You could interview each applicant as a guest presenter on those terms and, at the end, if any one person stood out, give them the job. Any not so good could provide audio clips for lighter sound bites.
Keep up the good work you guys.
As a non-programmer who wants to look into programming stuff for Linux, which language should I look into? I am confused after listening to your last episode. ;)
I would start with Python. There are really good tutorials and documentation, and the learning curve is not too steep.
And if you do decide to dip your toes in with Python – as suggested by Nathan – you could do a lot worse than following this online course before handing over any of your hard-earned for a ‘teach yourself Python’ sort of book:
Ooh, thanks for this hint! I started one of these courses a few hours ago and am already quite impressed with the epiphany frequency :-)
About (an)other host(s): Please do experiment & settle into the format you conclude to be best. The luddite listeners will cope ;-)
Mark Pilgrim’s “Dive Into Python” is another excellent resource. You may have a bit of trouble finding it, since Mark cut himself off from the world a couple of years ago and pulled all his web stuff down.
Well it looks as though someone resurrected it at http://www.diveintopython.net/
I can recommend Dive Into Python too. I’d also suggest finding something you find fun – audio manipulation, graphics, games, web dev, etc. Python on its own I found a bit too dry.
After figuring out the basics (lists, strings, etc.), I started out by doing a simple platform game with Python & PyGame and learnt a _lot_ this way.
I’ve been using Linux for nearly 20 years, since 1994. I remember when the UNIX Hater’s Handbook came out. That really was a different time for UNIX. Linux barely existed. I professionally used SunOS, Solaris, and HP/UX and, at university, used Ultrix and OSF/1 (Digital UNIX).
Truly, those commercial UNIXes were awful from the perspective of user experience.
Out of the box, they all had a horrible desktop environment. CDE, HP Vue, and whatever that horrible SunOS window manager was… they were universally terrible. Your shell was probably something like “sh” (as in the Bourne Shell, none of this fancy GNU stuff) or “csh” (not even “tcsh”). Your C compiler was some crappy C89 compiler that was only useful for building a real compiler like gcc. I don’t recall there being an application to view GIF or JPG files, only useless X11 bitmaps. Your editor was vi, and not “vim”, but the real original “vi”. Ultrix didn’t even have shared libraries!
I remember spending hours taking a fresh workstation and getting zsh, fvwm, gcc, gdb, GNU make, Emacs, CVS, xv, and countless other programs built and installed. I remember building TeX/LaTeX and it taking the better part of a day to get functioning. I don’t recall HP/UX having a packaging system that anyone (except HP) actually used. SunOS/Solaris had “pkg”, but not much was available.
What we have now in Linux is, well, 20 years ahead of what we had when the UNIX Hater’s Handbook came out. Pick one of the worst Linux distributions today and it’s hard to imagine it having as bad an “out of the box” experience as a commercial UNIX back then. Tools like yum or apt-get, everything you could possibly want pre-built/packaged, a myriad of great desktop environments… We have absolutely come a long way.
My first *nix experience was in 1990 with SCO Xenix. My main job was writing a stock control program in Foxbase. The only editor available at the start was vi. This was the start of a lifelong hatred of the wretched program :) Eventually we managed to get Joe (a WordStar clone) compiled, which made life much easier. Unix in those days was horrible compared to DOS. It might have been powerful, but it was virtually unusable by anyone except an expert. The same company for whom I was writing the Foxbase program also had a computer running VMS, which on my slight acquaintance seemed much nicer than Unix.
Did I really hear Paddy say “the biggest fastest supercomputer runs Ubuntu”? Wouldn’t it run even faster if it used Arch or Gentoo?
The Tianhe-2 supercomputer appears to have achieved the #1 ranking whilst running Kylin Linux, developed by China’s National University of Defense Technology. Apparently Ubuntu Server, OpenStack and Juju have been (or are now being) ported over to run on it. Whether that’s natively or virtualised is something that none of the news stories I’ve seen actually make clear.
Hi guys, great episode!
A couple of quick comments. You hit the nail on the head with the Snowden issue. The fact that we still use Google and eBay, even though we know there are security issues, is a great example of the public’s unchanging mindset. My hat’s off to folks like Danny, but they are in the minority. I wish I were as dedicated, but I have given in to the convenience of giving up my privacy to Google.
The problem is that your information will be out there whether you like it or not. For example, your credit history and banking information is just waiting to be plundered even if you never access it with a computer! I know people who refuse to set up an online account with their bank, but in my mind that only lengthens the time it takes for them to catch a problem. With an online account I can check my account weekly, as opposed to waiting for the monthly statement in the mail.
Passwords are a puzzler; I fear that without some sort of biometrics or card readers we are stuck with them. I think passwords are quickly becoming less effective and are becoming the equivalent of anti-virus software: feel-good solutions that don’t offer the security they advertise.
I finally got my C710 Chromebook unbricked and have given in to just using it as a Chromebook. I suspect I will pick up a netbook or ultrabook down the road to put Linux on but for now I’m happy using Chrome on the C710.
After the last show I tried Lubuntu on my old desktop and I am very happy with it. I’ll probably stick with Lubuntu for the foreseeable future. I have installed OpenOffice (I have it on my Windows laptop too, as an organization I work with uses it) and Chrome (I know, what can I say, I sold out).
Interesting discussion on phones. My iPhone 4 is nearing end of life (the upcoming system update won’t support the older iPhone 4), so I guess I’ll start researching replacements. Of course if I go to Android (or something else) I lose access to the apps I have bought – not the end of the world, but something to consider. In the end the only real smartphone options are Android or iPhone, so I may end up with another iPhone…
I really enjoy the chemistry the two of you have but I don’t think a third host would be a problem, if it was the right “Luddite.” Good luck on your search.
Jason in Virginia
I forgot to comment on the “gamer” news story. I personally think the term “gamer” should apply to console users (PS3, XBox, the Atari 2600 I had as a child) and those with PCs or laptops built for gaming. I don’t think smartphone or tablet users should count; the term should refer to those that buy a device primarily to play games.
On the topic of watching YouTube videos of others playing games, I would like to offer some insight. My son watches these, and his explanation is that he enjoys the “commentary” of the recorder. Basically these videos often have ongoing comments by the recorder, which the viewers find funny. It seems like a waste of time to me, but on reflection I like to listen to the commentary tracks on DVDs, so I seem to have a double standard regarding the subject.
Jason in Virginia
A lot of people buy smartphones and tablets with the intention of mostly playing games.
A lot of people buy smartphones for email and Facebook, Wikipedia lookups, train schedules, and will only occasionally, if at all, play a game on them.
A lot of people buy PCs for more or less the same reason. Then there’s the whole crowd who have PCs and smartphones for work.
Put simply, I think the term gamer is best applied to people who’s gaming takes up a significant amount of their spare time, regardless of what device or lack thereof they use in the process.
People addicted to Farmville would certainly be considered a gamer in my book, despite not needing a console to play it on.
s/who’s gaming/whose gaming/
Regarding Mr Snowden
If we’re a little more educated as a result of his actions, then hopefully there’s less chance that some of us may become dumb users in front of dumb terminals. Perhaps many have not changed their online habits since – maybe there’s little reason to – but at least we’ve been given a chance to decide for ourselves. And, if some of us suffer as a result of the decisions we make, then there’s little point in crying “we didn’t know”, because most of us have been informed.
Really enjoy the podcast.
I have also been experiencing my screen going blank periodically in Xubuntu 14.04, but it only happened when the monitor was connected via HDMI. I have two monitors connected, one HDMI and one DVI, and it was only the HDMI monitor that had the issue. Once I switched the problem monitor over to DVI, the issue went away! I’m not sure if the problem was with the HDMI signal or a bad port/cable, but it may be worth trying if anyone is still having this issue with an HDMI-connected monitor.
I mostly use laptops so it definitely isn’t an HDMI issue for me.
I’m curious to know what you meant when you said that “people who run btrfs get what they deserve”.
What exactly do we deserve?
I started using btrfs after reading this very compelling article on arstechnica:
I had only one issue with implementation, which ultimately may have been purely cosmetic and benign.
Presumably you have some concerns about it, and I’d like to know what they are since I generally appreciate your insights very much. From what I can tell there are virtually no downsides to btrfs.
Hi Joel – I was a little flippant in my comment, but my concerns are really twofold (and neither relates to the functionality that these sorts of file systems promise, which is very welcome).
Firstly, Btrfs hasn’t been around long enough to be considered production-ready; IMHO you really need a good few years after a stable release (and it hasn’t even got /there/ yet) before you ought to think about considering a file system trustworthy. Perhaps I’m unduly cautious – I only started using ext4 within the last year or so after all – but I’m happy to let others live on the bleeding edge ;)
Secondly, and purely anecdotally, I did run Btrfs briefly a while back to see how it performed, and suffered hugely with disk thrashing issues. Googling at the time showed that I wasn’t alone, but I’ve not looked at it recently so this could well be something that’s long been fixed.
Since every man and his dog seem to be rushing towards containerisation to solve workload management issues, I can see that file systems that support snapshotting and copy on write are going to become increasingly important and more widely deployed, which ought to help speed the development of Btrfs and the like, so maybe we’ll see a shorter path to stability than we have with other file systems in the past.
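For anyone curious what that snapshotting and copy-on-write functionality actually looks like in practice, here's a rough Btrfs sketch – the mount points and subvolume names are purely illustrative:

```shell
# Create a subvolume to hold, say, a container's root file system.
# (/mnt/pool is a placeholder for an existing Btrfs mount.)
btrfs subvolume create /mnt/pool/base

# A writable copy-on-write snapshot: near-instant, and only blocks
# that subsequently diverge from the original consume extra space.
btrfs subvolume snapshot /mnt/pool/base /mnt/pool/container1

# Read-only snapshots (handy as backup points) just add -r.
btrfs subvolume snapshot -r /mnt/pool/base /mnt/pool/base@backup
```

It's that near-free cloning that makes these file systems such a natural fit for the containerisation crowd.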
What a great episode. I am not a programmer, but I understood and followed along quite well, thanks to Joe and Paddy’s clarity on the air, and preparation before the show.
In regards to the issue of whether the show needs a third host, I don’t think it does. The show is really good, and I don’t think the Luddites give themselves enough credit for how well their agenda is outlined and executed when recording it. It’s a pretty professional podcast by two guys who don’t consider themselves professionals in tech journalism and commentary.
I was a bit too hard on the nice guys over at Mint Cast for their casual opening warm up chat and their overall slower pace. Podcasting is bloody difficult. I only have time to listen to 5 shows. Considering that two of them are relatively big budget (Android Central and IGN Beyond), I am impressed that Luddites comes very close to the big boys in polish, pace, flow and execution. You guys are better than you realize. Keep up the great work.
Good luck casting a third chair, should you feel you need one. As a Yank, I’d love to lend my semi-radio voice to the show, but I’m just a Linux end-user (and an annoying Newcastle United supporter). I think if the show added someone, it should be either a super user or another programmer or project manager. Perhaps someone with more interest in gaming would be helpful as well. A more playful user, perhaps. Of course anyone cheery would sweeten what I think is a wonderfully bitter and edgy show. Linux Luddites is like my black coffee of podcasts. What I’m saying is, you’ve got a distinctive style. You might not want to mess with it. The lack of silence in the show tells me that it is well outlined and written, and you guys are fully on when producing it. So it isn’t broken. The question is, could it use a third viewpoint? Time will tell. Maybe you should first cast guests. Try before you commit.
NixOS and GoboLinux: Paddy and Joe thanks much for the reviews, I really enjoyed them.
All the coverage of Docker of late has rekindled my interest in software deployment technologies. The Nix technology was new to me, but I had heard about GoboLinux a while back and had considered taking a look at it myself. Thanks for taking that bullet for me (and others) :-)
Apropos of nothing, have you guys heard about Fedora.next? I saw the long video about it a while back, but I still have no idea what it is about. Word is that “it” is splitting “it” up between Workstation, Server, and Cloud. But I have no idea what any of that is. There is also talk of getting rid of spins and making Gnome default. Allegedly. They want to have a default base for developers to make things for each of them. Or something.
Hi guys, thanks for reading out my data recovery e-mail. After sending it I went back and tried again, realising that photorec is only a sub-part of the larger testdisk suite, and that testdisk has its own data recovery process – not, as it sounds, just disk testing! Testdisk also took an age, but it found all of the data I’d potentially lost and allowed me to recover files and folders while keeping the correct names and folder structure. So to answer my own question: testdisk is the tool for data recovery.
Testdisk is awesome :) I usually use it on an image created with ddrescue (GNU spin of dd_rescue).
Btw, the reason you may not get a correct folder structure or file names depends on whether the file table is still intact. Without that original table (or a backup), files are just anonymous byte streams on the disk.
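For anyone wanting to try that image-first approach, the rough sequence is something like the following – the device name and file names are placeholders, so double-check yours before running anything:

```shell
# 1. Image the failing disk first, so the recovery tools never have
#    to touch the original hardware. /dev/sdX is a placeholder.
#    disk.log is ddrescue's mapfile, so an interrupted run can resume.
ddrescue -d /dev/sdX disk.img disk.log

# 2. Point testdisk at the image, not the raw disk. If the file
#    table survived, you get names and folder structure back.
testdisk disk.img

# 3. If the file table is too far gone, fall back to photorec,
#    which carves anonymous files out of the image by signature.
photorec disk.img
```

Working from the image also means you can make as many recovery attempts as you like without further stressing a dying drive.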
Would love to be a host on the show – been listening since you started and like your viewpoints, so will be sending in a clip as soon as I’m able. You don’t really need a full-time third host, as you balance each other out well with your differing viewpoints. Where better to get a chance to start out in podcasting than on, IMHO, one of the best Linux/UNIX podcasts on the interwebs?
Thanks for the podcast, I think this may be the most educational thing I listen to on a regular basis!
I also had real problems with connecting my Nexus 5 to my PC (which runs on Arch Linux). After a lot of time wasted looking around on various forums for a solution, I installed gvfs-mtp using Pacman and it worked perfectly. I’m not sure if this would work on all distros, but if you use Arch then that’s my recommendation.
P.S. Also always check the wiki first, I felt really stupid spending ages trying one method which, when I looked it up on the wiki, plainly said “this is unlikely to work”…
Comments are now closed.
The content of this website, and that of the podcasts produced by the website owners, is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.