Noticeably absent from the trial and much of the media attention are the phone companies. Did they know their networks could be so systematically abused? Did they care?
In any case, the public has never been fully informed about how phones have been hacked. Speculation has it that phone hackers were guessing the PINs for remote voicemail access, typically trying birthdates and weak PINs like 0000 or 1234.

There is more to it
Those in the industry know that there are additional privacy failings in mobile networks, especially the voicemail service. It is not just in the UK either.
There are various reasons for not sharing explicit details on a blog like this and comments concerning such techniques can't be accepted.
Nonetheless, there are some points that do need to be made:
- it is still possible for phones, especially voicemail, to be hacked on demand
- an attacker does not need expensive equipment nor do they need to be within radio range (or even the same country) as their target
- the attacker does not need to be an insider (phone company or spy agency employee)
The bottom line is that the only way to prevent voicemail hacking is to disable the phone's voicemail service completely. Voicemail is not really necessary given that most phones support email now. For those who feel they need it, consider running the voicemail service on your own private PBX using free software like Asterisk or FreeSWITCH. Some Internet telephony service providers also offer third-party voicemail solutions that are far more secure than the default services offered by mobile networks.
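As a rough illustration of the PBX approach, here is a minimal Asterisk configuration sketch for hosting your own voicemail box. The mailbox number, PIN, user name, and channel name (`SIP/alice`) are hypothetical placeholders, not values from any real deployment:

```
; voicemail.conf -- define one mailbox in the "default" context
[default]
100 => 4257,Alice Example,alice@example.com

; extensions.conf -- ring the handset, then fall through to voicemail
[incoming]
exten => s,1,Dial(SIP/alice,20)
exten => s,n,VoiceMail(100@default,u)
exten => s,n,Hangup()
```

With a setup like this, voicemail PIN policy, message storage, and remote-access rules are all under your control rather than the mobile network's.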
To disable voicemail, simply do two things:
- send a letter to the phone company telling them you do not want any voicemail box in their network
- in the mobile phone, select the menu option to disable all diversions, or manually disable each diversion one by one (e.g. disable forwarding when busy, disable forwarding when not answered, disable forwarding when out of range)
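If the phone's menus don't expose these options, the standard GSM supplementary service codes can be dialled directly. These are the usual codes; some networks may not honour all of them, so verify with the query code afterwards:

```
##002#   cancel and deactivate all call diversions
##67#    cancel diversion when busy
##61#    cancel diversion when not answered
##62#    cancel diversion when out of range or switched off
*#002#   query the current status of all diversions
```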
When I first started using computers, a “word processor” was a program that edited text. The most common and affordable printers were dot-matrix, and people who wanted good quality printing used daisy wheel printers. Text from a word processor was sent to a printer a letter at a time. The options for fancy printing were bold and italic (for dot-matrix), underlines, and the use of spaces to justify text.
It really wasn’t much good if you wanted to include pictures, graphs, or tables. But if you just wanted to write some text it worked really well.
When you were editing text it was typical that the entire screen (25 rows of 80 columns) would be filled with the text you were writing. Some word processors used 2 or 3 lines at the top or bottom of the screen to display status information.
Some time after that desktop publishing (DTP) programs became available. Initially most people had no interest in them because of the lack of suitable printers: the early LASER printers were very expensive, and the graphics mode of dot matrix printers was slow to print and gave fairly low quality. Printing graphics on a cheap dot matrix printer using the thin continuous paper usually resulted in damaging the paper – a bad result that wasn’t worth the effort.
When LASER and Inkjet printers started to become common, word processing programs started getting many more features and basically took over from desktop publishing programs. This made them slower and more cumbersome to use. For example, Star Office/OpenOffice/LibreOffice has distinguished itself by remaining equally slow as it transitioned from running on an OS/2 system with 16M of RAM in the early 90s to a Linux system with 256M of RAM in the late 90s to a Linux system with 1G of RAM in more recent times. It’s nice that with the development of PCs that have AMD64 CPUs and 4G+ of RAM we have finally managed to increase PC power faster than LibreOffice can consume it. But it would be nicer if they could optimise for the common cases. LibreOffice isn’t the only culprit; it seems that every word processor that has been in continual development for that period of time has had the same feature bloat.
The DTP features that made word processing programs so much slower also required more menus to control them. So instead of just having text on the screen with maybe a couple of lines for status we have a menu bar at the top followed by a couple of lines of “toolbars”, then a line showing how much width of the screen is used for margins. At the bottom of the screen there’s a search bar and a status bar.

Screen Layout
By definition the operation of a DTP program will be based around the size of the paper to be used. The default for this is A4 (or “Letter” in the US) in a “portrait” layout (higher than it is wide). The cheapest (and therefore most common) monitors in use are designed for displaying wide-screen 16:9 ratio movies. So we have images of A4 paper with a width:height ratio of 0.707:1 displayed on a wide-screen monitor with a 1.777:1 ratio. This means that only about 40% of the screen space would be used if you don’t zoom in (but if you zoom in then you can’t see many rows of text on the screen). One of the stupid ways this is used is by companies that send around word processing documents when plain text files would do, so everyone who reads the document uses a small portion of the screen space and a large portion of the email bandwidth.
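The ~40% figure above falls straight out of the aspect ratios. A minimal sketch of the arithmetic, scaling an A4 page to fit the screen height:

```python
# Rough estimate of how much of a 16:9 monitor an A4 portrait page
# can occupy when scaled to fit the screen height (no zooming).

screen_w, screen_h = 16.0, 9.0    # widescreen aspect ratio
page_w, page_h = 210.0, 297.0     # A4 in millimetres (ratio ~0.707:1)

# Scale the page so its height exactly fills the screen height.
scale = screen_h / page_h
used_w = page_w * scale           # width the page occupies on screen

fraction = used_w / screen_w      # fraction of the screen width (and area) used
print(f"{fraction:.0%} of the screen is used")
```

Since the page fills the full screen height, the unused fraction is entirely wasted width at the sides.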
Note that this problem of wasted screen space isn’t specific to DTP programs. When I use the Google Keep website to edit notes on my PC they take up a small fraction of the screen space (about 1/3 screen width and 80% screen height) for no good reason. Keep displays about 70 characters per line and 36 lines per page. Really every program that allows editing moderate amounts of text should allow more than 80 characters per line if the screen is large enough and as many lines as fit on the screen.
One way to alleviate the screen waste on DTP programs is to use a “landscape” layout for the paper. This is something that all modern printers support (AFAIK the only printers you can buy nowadays are LASER and ink-jet and it’s just a big image that gets sent to the printer). I tried to do this with LibreOffice but couldn’t figure out how. I’m sure that someone will comment and tell me I’m stupid for missing it, but I think that when someone with my experience of computers can’t easily figure out how to perform what should be a simple task then it’s unreasonably difficult for the vast majority of computer users who just want to print a document.
When trying to work out how to use landscape layout in LibreOffice I discovered the “Web Layout” option in the “View” menu which allows all the screen space to be used for text (apart from the menu bar, tool bars, etc). That also means that there are no page breaks! That means I can use LibreOffice to just write text, take advantage of the spelling and grammar correcting features, and only have screen space wasted by the tool bars and menus etc.
I never worked out how to get Google Docs to use a landscape document or a single webpage view. That’s especially disappointing given that the proportion of documents that are printed from Google Docs is probably much lower than most word processing or DTP programs.

What I Want
What I’d like to have is a word processing program that’s suitable for writing draft blog posts and magazine articles. For blog posts most of the formatting is done by the blog software and for magazine articles the editorial policy demands plain text in most situations, so there’s no possible benefit of DTP features.
The ability to edit a document on an Android phone and on a Linux PC is a good feature. While the size of a phone screen limits what can be done it does allow jotting down ideas and correcting mistakes. I previously wrote about using Google Keep on a phone for lecture notes. It seems that the practical ability of Keep to edit notes on a PC is limited to about the notes for a 45 minute lecture. So while Keep works well for that task it won’t do well for anything bigger unless Google make some changes.
Google Docs is quite good for editing medium size documents on a phone if you use the Android app. Given the limitations of the device size and input capabilities it works really well. But it’s not much good for use on a PC.
I’ve seen a positive review of One Note from Microsoft. But apart from the fact that it’s from Microsoft (with all the issues that involves) there’s the issue of requiring another account. Using an Android phone requires a Gmail account (in practice for almost all possible uses if not in theory) so there’s no need to get an extra account for Google Keep or Docs.
What would be ideal is an Android editor that could talk to a cloud service that I run (maybe using WebDAV) and which could use the same data as a Linux-X11 application.
- http://keep.google.com/
- http://etbe.coker.com.au/2014/04/19/phone-based-lectures/
- http://www.onenote.com/
After dealing with the client side of the DNSSEC puzzle last week, I thought it behooved me to also go about getting DNSSEC going on the domains I run DNS for. Like the resolver configuration, the server side work is straightforward enough once you know how, but boy howdy are there some landmines to be aware of.
One thing that made my job a little less ordinary is that I use and love tinydns. It’s an amazingly small and simple authoritative DNS server, strong in the Unix tradition of “do one thing and do it well”. Unfortunately, DNSSEC is anything but “small and simple” and so tinydns doesn’t support DNSSEC out of the box. However, Peter Conrad has produced a patch for tinydns to do DNSSEC, and that does the trick very nicely.
A brief aside about tinydns and DNSSEC, if I may… Poor key security is probably the single biggest compromise vector for crypto. So you want to keep your keys secure. A great way to keep keys secure is to not put them on machines that run public-facing network services (like DNS servers). So, you want to keep your keys away from your public DNS servers. A really great way of doing that would be to have all of your DNS records somewhere out of the way, and when they change regenerate the zone file, re-sign it, and push it out to all your DNS servers. That happens to be exactly how tinydns works. I happen to think that tinydns fits very nicely into a DNSSEC-enabled world. Anyway, back to the story.
Once I’d patched the tinydns source and built updated packages, it was time to start DNSSEC-enabling zones. This breaks down into a few simple steps:
Generate a key for each zone. This will produce a private key (which, as the name suggests, you should keep to yourself), a public key in a DNSKEY DNS record, and a DS DNS record. More on those in a minute.
One thing to be wary of if you’re like me and don’t want or need separate “Key Signing” and “Zone Signing” keys: you must generate a “Key Signing” key – this is a key with a “flags” value of 257. Doing this wrong will result in all sorts of odd-ball problems. I wanted to just sign zones, so I generated a “Zone Signing” key, which has a “flags” value of 256. Big mistake.
Also, the DS record is a hash of everything in the DNSKEY record, so don’t just think you can change the 256 to a 257 and everything will still work. It won’t.
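To see why the 256/257 swap can't work, here is a sketch of how a DS digest is computed (digest type 2, SHA-256, per RFC 4034): the hash covers the owner name plus the entire DNSKEY RDATA, and the flags field is part of that RDATA. The key bytes below are made up purely for illustration:

```python
import hashlib
import struct

def wire_name(name: str) -> bytes:
    """Encode a domain name in DNS wire format (length-prefixed labels)."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.lower().encode()
    return out + b"\x00"

def ds_digest(owner: str, flags: int, protocol: int, algorithm: int,
              pubkey: bytes) -> str:
    """SHA-256 DS digest: hash of canonical owner name + full DNSKEY RDATA."""
    rdata = struct.pack("!HBB", flags, protocol, algorithm) + pubkey
    return hashlib.sha256(wire_name(owner) + rdata).hexdigest().upper()

key = bytes(range(32))  # fake public key material, for illustration only

ksk = ds_digest("example.com", 257, 3, 8, key)  # Key Signing Key flags
zsk = ds_digest("example.com", 256, 3, 8, key)  # Zone Signing Key flags

# Changing only the flags field changes the entire digest.
assert ksk != zsk
```

So a DS record published for the 257 key says nothing about the 256 key, and vice versa; any flags change means regenerating and republishing the DS.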
Add the key records to the zone data. For tinydns, this is just a matter of copying the zone records from the generated key into the zone file itself, and adding an extra pseudo record (it’s all covered in the tinydnssec howto).
Publish the zone data. Reload your BIND config, run tinydns-sign and tinydns-data then rsync, or do whatever it is PowerDNS people do (kick the database until replication starts working again?).
Test everything. I found the Verisign Labs DNSSEC Debugger to be very helpful. You want ticks everywhere except for where it’s looking for DS records for your zone in the higher-level zone. If there are any other freak-outs, you’ll want to fix those – because broken DNSSEC will take your domain off the Internet in no time.
Tell the world about your DNSSEC keys. This is simply a matter of giving your DS record to your domain registrar, for them to add it to the zone data for your domain’s parent. Wherever you’d normally go to edit the nameservers or contact details for your domain, you probably want to go to the same place and look for something about “DS” or “Domain Signer” records. Copy and paste the details from the DS record in your zone into there, submit, and wait a minute or two for the records to get published.
Test again. Before you pat yourself on the back, make sure you’ve got a full board of green ticks in the DNSSEC Debugger. If anything’s wrong, you want to roll back immediately, because broken DNSSEC means that anyone using a DNSSEC-enabled resolver just lost the ability to see your domain.
That’s it! There’s a lot of complicated crypto going on behind the scenes, and DNSSEC seems to revel in the number of acronyms and concepts that it introduces, but the actual execution of DNSSEC-enabling your domains is quite straightforward.
In short: I love my MacBook Air. It is the best (laptop) hardware I have ever owned. I have seen hardware which was much more flaky in the past. I can set the display backlight to zero via software, which saves me a lot of battery life and also offers a bit of anti-spy-across-my-shoulder support. WLAN and bluetooth work nicely.
And I just love the form-factor and the touch-feeling of the hardware. I even had the bag I use to carry my braille display modified so that the Air just fits in.
I can't say how it behaves with X11. Given how flaky accessibility with graphical desktops on Linux is, I have still not made the switch. My MacBook Air is my perfect mobile terminal, I LOVE it.
I am sort of surprised about the recent rant of Paul about MacBook Hardware. It is rather funny that we perceive the same technology so radically different.
And after reading the second part of his rant I am wondering if I am no longer allowed to consider myself part of the "hardcore F/OSS world", because I don't consider Apple as evil as apparently most others. Why? Well, first of all, I actually like the hardware. Secondly, you have to show me a vendor first that builds usable accessibility into their products, and I mean all their products, without any extra price-tag attached. Once the others start to consider people with disabilities, we can talk about apple-bashing again. But until then, sorry, you don't see the picture as I do.
Apple was the first big company on the market to take accessibility seriously. And they are still unbeaten, at least when it comes to bells and whistles included. I can unbox and configure any Apple product currently sold completely without assistance. With some products, you just need to know a single keypress (triple-press the home button for touch devices and Cmd+F5 for Mac OS X), and with others, during initial bootup, a speech synthesizer even tells you how to enable accessibility in case you need it.
And after that is enabled, I can perform the setup of the device completely on my own. I don't need help from anyone else. And after the setup is complete, I can use 95% of the functionality provided by the operating system.
And I am blind, part of a very small minority group, so to speak.
In Debian circles, I have even heard the sentiment that we supposedly have to accept that small minority groups are ignored sometimes. Well, as long as we think that way, as long as we think strictly economically, we will never fully get there. And we will never be the universal operating system, actually. Sorry to say that, but I think there is some truth to it.
So, who is evil? “Scratch your own itch” doesn't always work to cover everything. How do we motivate contributors to work on things they don't personally need (yet)? How can we ensure that complicated but seldom used features stay stable and do not fall to dust just because some upstream decides to rewrite an essential subcomponent of the dependency tree? I don't know. All I know is that these issues need to be solved in a universal operating system.
Near the beautiful Swedish town of Lindsborg, Kansas, there stands a hill known as Coronado Heights. It lies in the midst of the Smoky Hills, named for the smoke-like mist that sometimes hangs in them. We Kansans smile our usual smile when we tell the story of how Francisco Vásquez de Coronado famously gave up his search for gold after reaching this point in Kansas.
Anyhow, it was just over a year ago that Laura, Jacob, Oliver, and I went to Coronado Heights at the start of summer, 2013 — our first full day together as a family.
Atop Coronado Heights sits a “castle”, an old WPA project from the 1930s:
The view from up there is pretty nice:
And, of course, Jacob and Oliver wanted to explore the grounds.
As exciting as the castle was, simple rocks and sand seemed to be just as entertaining.
After Coronado Heights, we went to a nearby lake for a picnic. After that, Jacob and Oliver wanted to play at the edge of the water. They loved to throw rocks in and observe the splash. Of course, it pretty soon descended (or, if you are a boy, “ascended”) into a game of “splash your brother.” And then to “splash Dad and Laura”.
Fun was had by all. What a wonderful day! Writing the story reminds me of a little while before that — the first time all four of us enjoyed dinner and s’mores at a fire by our creek.
Jacob and Oliver insisted on sitting — or, well, flopping — on Laura’s lap to eat. It made me smile.
(And yes, she is wearing a Debian hat.)