Planet Debian

Planet Debian - http://planet.debian.org/

Junichi Uekawa: Having fun with lisp.

9 July, 2014 - 08:18
Having fun with lisp. I was writing a lisp interpreter in C++ using boost::spirit. I am happy that my eval can do lambda. Took me a long time to figure out what was wrong with the different types. The data structure was recursive, and I needed to make a recursive type. make_recursive_variant works, but when it doesn't work, the reason isn't obvious.

Wouter Verhelst: HP printers require systemd, apparently

9 July, 2014 - 02:15
printer-driver-postscript-hp Depends: hplip
hplip Depends: policykit-1
policykit-1 Depends: libpam-systemd
libpam-systemd Depends: systemd (= 204-14)

Since the last of the above is a versioned dependency, you can't use systemd-shim to satisfy it.

I do think we should migrate to systemd. However, it's unfortunate that this change is being rushed like this. I want to migrate my personal laptop to systemd—but not before I have the time to deal with any fallout that might result, and to make sure I can properly migrate my configuration.

Workaround (for now): hold policykit-1 at 0.105-3 rather than have it upgrade to 0.105-6. That version doesn't have a dependency on libpam-systemd.
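
One minimal way to do that (a sketch, assuming 0.105-3 is the version currently installed) is to put the package on hold so apt won't upgrade it:

# keep the installed policykit-1 from being upgraded to 0.105-6
$ sudo apt-mark hold policykit-1

# later, once ready to deal with the systemd migration:
$ sudo apt-mark unhold policykit-1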

Off-hand questions:

  • Why does one need to log in to an init system? (yes, yes, it's probably a session PAM module, not an auth or password module. Still)
  • What does policykit do that can't be solved with proper use of Unix domain sockets and plain old unix groups?

All this feels like another case of overengineering, like most of the *Kit thingies.

Update: so the systemd package doesn't actually cause systemd to be run; there are other packages that do that, and systemd-shim can be installed. I misread things. Of course, the package name is somewhat confusing... but that's no excuse.

Matthew Palmer: Doing Password Complexity Wrong

8 July, 2014 - 13:00

I just made an account on yet another web service. On the suggestion of my password manager, I attempted to use the password “W:9[$X*F”. It was rejected because “Password must contain at least one non-alphabet character, one lowercase letter, one uppercase letter”. OK, how about “Passw0rd”? Yep, that’s fine.

Anyone want to guess which of those two passwords is going to fall victim to a brute-force attack first? Go on, don’t be shy, take a wild shot in the dark!

Joey Hess: laptop death

8 July, 2014 - 08:00

So I was at Ocracoke island, camping with family, and I brought my laptop along as I've done probably half a dozen times before. An enormous thunderstorm came up. It rained for 8 hours and thundered for 3 of those. Some lightning cracked quite close by as we crouched in the food tent, our feet up off the increasingly wet ground "just in case". The campground flooded. Luckily we were camped in the dunes and the tents mostly avoided being flooded with 2-3 inches of water. (That was just the warmup; a hurricane hit a week after we left.)

My laptop was in my tent when this started, and I got soaked to the skin just running over there and throwing it up on the thermarest to keep it out of any flooding and away from any drips. It seemed ok, so best not to try to move it to the car in that downpour.

Next time I checked, it turned out the top vent of the tent was slightly open and dripping. The laptop bag was damp. But inside it seemed ok. Rain had slackened to just heavy, so I ran it down to the car. Laptop appeared barely damp, but it was hard to tell as I had quite forgotten what "dry" was. Turned it on for 10 seconds to check the time. It was 7:30 and we still had to cook dinner in this mess. Transferred it to a dry bag.

(By the way, in some situations, discovering you have a single dry towel you didn't know you had is the best gift in the world!)

Next morning, the laptop was dead. When powered on, the fan came on full, the screen stayed black, and after a few seconds it turned itself back off.

I need this for work, so it was a crash priority to get it fixed or a replacement. Before I even got home, I had logged onto Lenovo's website to check warranty status and found 2 things:

  1. They needed some number from a sticker on the bottom of my laptop. Which was no longer there.
  2. The process required some strange login on an entirely different IBM website.

At this point, I had a premonition of how the bureaucracy would go. Reading Sesse's Blehnovo, I see I was right. I didn't even try. I ordered a replacement with priority shipping.

When I got home, I pulled the laptop apart to try to debug it. I still don't know what's wrong with it. The SSD may be damaged; it seems to cause anything I put it into to fail to work.

New laptop arrived in 2 days. Since this model is now a year old, it was a few hundred dollars cheaper this time around. And now I have an extra power supply, and a replacement keyboard, and a replacement fan etc. And I've escaped the dead USB port and broken rocker switch of the old laptop too.

The only weird thing is that, while my old laptop had no problem with my Toshiba passport USB drive, this new one refuses to recognize it unless I plug it into a USB 1.0 hub. Oh well..

Update: Ocracode for this trip:

OBX1.1 P6 L7 SA3d+++b+c++ U2(rearended,laptop death) T4f2b1 R2T Bb+m++++n++
F+++u++ SC+s-g6 H+f0i3 V+++s++m0 E++r+

Steinar H. Gunderson: Blehnovo

8 July, 2014 - 06:47

Here's my own little (ongoing) story about Lenovo's customer support; feel free to skip if you don't like rants. (You may remember that it took me several months to get to actually buy this laptop in the first place.) Everything within “quotes” is an actual quote from Lenovo, except where otherwise noted.

May 30th: My laptop accidentally goes into the ground, and the screen cracks. Gah. Oh well, I'll be without a laptop over the weekend, but I have this nice accident warranty and NBD thing from Lenovo, right? I go to their support web site; they recommend that I register with IBM and file a service ticket. I do so. Their site says I will receive a confirmation email within ten minutes.

Jun 1st: I realize I haven't received anything from Lenovo or IBM, despite 36 hours passing. Oh well.

Jun 2nd: The web system claims Lenovo has “successfully contacted” me several times, despite me never hearing anything from them.

Jun 3rd: I call Lenovo. They don't speak any English. They say there's an error in the “type” I've given them; seemingly “X240” is an invalid type, I needed to write “20AL”. I get it corrected.

Jun 4th: Lenovo calls. I talk to them in German and explain what happened (again). They say that I have the choice between paying €150 + parts and sending it in, or €450 + parts to have a serviceman come to me. (I am not 100% sure these numbers are correct, but they're in the right ballpark.) I say that this sounds very weird since I have accident insurance, but the guy from Lenovo seems unfazed and says they will only cover things under warranty if it's a design mistake. Eventually I say that sure, I'll pay for the serviceman; I just want my laptop fixed, fast. They ask for photos of the damage, which I send immediately.

Jun 6th: A week after the damage, and nothing has happened.

Jun 12th: Still nothing has happened. I press the “escalate” button on the web page.

Jun 18th: Still nothing has happened. I send Lenovo email asking what the heck is going on. My case now changes to “the customer will send the machine in to the depot for servicing” (not an exact quote; I don't have this text anymore), and I get an email with an address. I reply asking why on Earth this is, quoting their web page for saying “If you are entitled to Onsite Warranty, your Accidental Damage Protection claim may be repaired at your location”.

Later that day, Lenovo calls me again. It turns out they have no extended warranty or insurance registered on me. They ask me to provide “proof of purchase”, and give me a new case number (since the old one is now seemingly locked into a “will send to depot” situation). I send them the warranty email they originally sent me, including a long warranty code (20 alphanumeric digits) and a PIN. (In passing, I notice that due to a very delayed shipment, this warranty seemingly started running a month or so before I actually received the laptop, so the so-called 4-year warranty is seemingly 3 years 11 months. Oh well.)

Jun 19th: I am contacted by Lenovo. They say this information is not good enough as “proof of purchase”. They reiterate that I need to send them “proof of purchase”. I send them every single email I have ever received from them regarding my purchase.

Jun 24th: Nothing has happened. I email Lenovo asking for a status update. I get an email saying they have “forwarded all the needed information to the warranty service, so that the extended warranty will be registered”. All I can do is wait.

Jun 30th: I miss a telephone call from Lenovo. I get an email saying they'll close the case in two days. I call them, choosing English in the telephone menu. I get to a polite gentleman who speaks English well, but all the case notes are in German, so he can't make heads or tails of my case. He says he'll have the technician responsible for my case call me back.

He does really call me back the same day. He says what they have received is not valid as “proof of purchase”. I become agitated over the phone, pointing out that it should not be my problem if their internal systems are messed up; I've obviously paid 228 CHF for something. He claims to understand, but says that the systems will not work without a “proof of purchase”. He says I need to call Digital River (the company that operates shop.lenovo.ch). He gives me their telephone number. I think it looks funny, and ask him if this is really the right number; he says oh, no, that's the German one, not the Swiss one. He gives me the Swiss one. I call the number and it's for some completely different company, so I try the German one. It gives me a telephone menu, which says that for ThinkPad warranty questions, I need to call <some number>. I call that number; it's for Lenovo tech support in Germany. The tech at the other end of the line does not understand why Digital River would send me to Lenovo for warranty questions, but gives me their Swiss number and email address. The Swiss number is indeed correct, but just sends me to exactly the same menu. I send them an email.

On a whim, I check my warranty page on lenovo.com. It clearly says I have the extended warranty properly registered already! I forward a screenshot to Lenovo.

Jul 1st: I get an email from Lenovo: “Although the warranty appears on Lenovo website to be ok, please send us the proof of purchase from the extended warranty, so we can register it in our database(it appears NOT to be registered). Thank you.”

Jul 3rd: I get an email from Digital River, pointing me to a web page where I can print out some very nondescript-looking bill. I make a PDF out of it and send it to Lenovo.

Jul 7th: I still haven't heard anything from Lenovo. But! Now I am in Norway on vacation, which means I have a new trick up my sleeve: I call Lenovo Norway. I describe the case. The man says that this won't be covered by warranty, and I point out that I have accident insurance. He says (my translation/paraphrasing): “Oh, you're right, it does show up in this other system here! Don't worry, we'll fix this.” He asks me to send him an email with the screenshot of the warranty. I do so. He opens a new case, tells me that I'll have to send it in (seemingly onsite is only for warranty coverage after all?), but that it'll usually take less than a week. I receive an email with a link to DHL for ordering pickup, packaging instructions and pre-filled customs documents. It also has a form where I am supposed to briefly describe the case again (sure), say what I want them to do if the SSD is damaged (give it back to me unrepaired so I can do my own rescue; no Windows 8.1 reimaging, please) and write down all my passwords (fat chance).

So, there we are. Seven minutes with Lenovo Norway got me where 38 days of talking to Lenovo Switzerland/Germany couldn't—now let's just hope that DHL actually picks it up tomorrow and that I get it repaired and back within reasonable time.

The end? I hope.

Jonathan McDowell: 2014 SPI Board election nominations open

8 July, 2014 - 04:13

I put out the call for nominations for the 2014 Software in the Public Interest (SPI) Board election last week. At this point I haven't yet received any nominations, so I'm mentioning it here in the hope of a slightly wider audience. Possibly not the most helpful as I would hope readers who are interested in SPI are already reading spi-announce. There are 3 positions open this election and it would be good to see a bit more diversity in candidates this year. Nominations are open until the end of Tuesday July 13th.

The primary hard and fast time commitment a board member needs to make is to attend the monthly IRC board meetings, which are conducted publicly via IRC (#spi on the OFTC network). These take place at 20:00 UTC on the second Thursday of every month. More details, including all past agendas and minutes, can be found at http://spi-inc.org/meetings/. Most of the rest of the board communication is carried out via various mailing lists.

The ideal candidate will have an existing involvement in the Free and Open Source community, though this need not be with a project affiliated with SPI.

Software in the Public Interest (SPI, http://www.spi-inc.org/) is a non-profit organization which was founded to help organizations develop and distribute open hardware and software. We see it as our role to handle things like holding domain names and/or trademarks, and processing donations for free and open source projects, allowing them to concentrate on actual development.

Examples of projects that SPI helps include Debian, LibreOffice, OFTC and PostgreSQL. A full list can be found at http://www.spi-inc.org/projects/.

Jan Wagner: Monitoring Plugins release ahead

7 July, 2014 - 21:41

It seems to be a great time for monitoring solutions. Some of you may have noticed that Icinga has released its first stable version of the completely redeveloped Icinga 2.

After several changes in the recent past, where the team maintaining the plugins used by several monitoring solutions was busy moving everything to new infrastructure, they are now back on track. The recent development milestone has been reached and a call for testing has been sent out.

In the meantime I prepared the packaging for this bigger move. The packages have now moved to the source package monitoring-plugins; the whole set of packaging changes can be observed in the changelog. With this new release we also have some NEWS, which might be useful to check. The same goes for the upstream NEWS.

You can give the packages a go and grab them from my 'unstable' and 'wheezy-backports' repositories at http://ftp.cyconet.org/debian/. Right after the stable release, the packages will be uploaded into Debian unstable, but might get delayed by the NEW queue due to the new package names.
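
The usual apt pattern applies; the suite and component below are placeholders (check the repository layout at the URL above for the real names), and the binary package name is assumed here to match the source package:

# /etc/apt/sources.list.d/cyconet.list -- hypothetical entry, adjust suite/component
# to the actual 'unstable' or 'wheezy-backports' trees mentioned above
deb http://ftp.cyconet.org/debian/ <suite> <component>

$ sudo apt-get update
$ sudo apt-get install monitoring-plugins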

Dominique Dumont: Status and next step on lcdproc automatic configuration upgrade with Perl and Config::Model

7 July, 2014 - 00:42

Back in March, I uploaded a Debian version of lcdproc with a unique feature: user and maintainer configurations are merged during package upgrade, so user customizations and developers’ enhancements are both preserved in the new configuration file. (See this blog for more details.) This avoids tedious manual editing of the LCDd.conf configuration file after every upgrade of the lcdproc package.

At the beginning of June, a new version of lcdproc (0.5.7-1) was uploaded. This triggered another round of automatic upgrades on users’ systems.

According to the popcon rise of libconfig-model-lcdproc-perl, about 100 people have upgraded lcdproc on their systems. Since automatic upgrade has an opt-out feature, one cannot say for sure that 100 people are actually using automatic upgrade, but I bet a fair portion of them are.

So far, only one person has complained: a bug report was filed about the many dependencies brought in by libconfig-model-lcdproc-perl.

The next challenge for lcdproc configuration upgrade is brought by a bug reported on Ubuntu: the device file provided by the imon kernel module is a moving target. The device file created by the kernel can be /dev/lcd0 or /dev/lcd1 or even /dev/lcd2. Static configuration files and moving targets don’t mix well.

The obvious solution is to provide a udev rule so that a symbolic link is created from a fixed location (/dev/lcd-imon) to the moving target. Once the udev rule is installed, the user only has to update the LCDd.conf file to use the symlink as the imon device file, and we’re done.

But, wait… The whole point of automatic configuration upgrade is to spare the user this kind of trouble: the upgrade must be completely automatic.

Moreover, the upgrade must work in all cases: whether udev is available (Linux) or not. If udev is not available, the value present in the configuration file must be preserved.

To know whether udev is available, the upgrade tool (aka cme) will check whether the file provided by udev (/dev/lcd-imon) is present. This will be done by the lcdproc postinst script (which is run automatically at the end of the lcdproc upgrade), which means that the new udev rule must also be activated in the postinst script before the upgrade is done.

In other words, the next version of lcdproc (0.5.7-2) will:

  • Install a new udev rule to provide the lcd-imon symbolic link
  • Activate this rule in the lcdproc postinst script before upgrading the configuration (note to udev experts: yes, the udev rule is activated with the “--action=change” option)
  • Upgrade the configuration by running “cme migrate” in the lcdproc postinst script.
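
In shell terms, the relevant postinst steps will look roughly like this (a sketch of the idea, not the actual maintainer script; dh boilerplate and error handling are omitted):

# make sure the newly shipped udev rule is active and the lcd-imon symlink
# exists before the configuration is migrated
udevadm control --reload-rules
udevadm trigger --action=change

# then merge user settings and new defaults into LCDd.conf
cme migrate lcdproc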

In the lcdproc configuration model installed by libconfig-model-lcdproc-perl, the “imon device” parameter is enhanced so that running cme check lcdproc or cme migrate lcdproc issues a warning if /dev/lcd-imon exists and the imon driver is not configured to use it.

This way, the next installation of lcdproc will deliver a fix for imon and cme will fix user’s configuration file without requiring user input.

The last point is admittedly bad marketing as users will not be aware of the magic performed by Config::Model… Oh well…

In the previous section, I’ve briefly mentioned that the “imon_device” parameter is “enhanced” in the lcdproc configuration model. If you’re not already bored, let’s lift the hood and see what kind of enhancements were added.

Let’s peek into LCDd.conf, the lcdproc configuration file. You may remember that the formal description of all LCDd.conf parameters and their properties (i.e. the lcdproc configuration model) is generated from this file. The comments in LCDd.conf follow a convention so that most properties of the parameters can be extracted from them. In the example below, the comments show that NewFirmware is a boolean value expressed as yes or no, the latter being the default:

# Set the firmware version (New means >= 2.0) [default: no; legal: yes, no]
NewFirmware=no

Back to the moving target. In LCDd.conf, the imon device file parameter is declared this way:

# Select the output device to use
Device=/dev/lcd0

This means that Device is a string whose default value is /dev/lcd0.

This is wrong once the special udev rule provided with the Debian packages is activated: with this rule, the default value must be /dev/lcd-imon.

To fix this problem, a special comment is added in the Debian version of LCDd.conf to further tune the properties of the device parameter:

# select the device to use
# {%
#   default~
#   compute
#     use_eval=1
#     formula="my $l = '/dev/lcd-imon'; -e $l ? $l : '/dev/lcd0';"
#     allow_override=1 -
#   warn_if:not_lcd_imon
#     code="my $l = '/dev/lcd-imon';defined $_ and -e $l and $_ ne $l ;"
#     msg="imon device does not use /dev/lcd-imon link."
#     fix="$_ = undef;"
#   warn_unless:found_device_file
#     code="defined $_ ? -e : 1"
#     msg="missing imon device file"
#     fix="$_ = undef;"
#   - %}
Device=/dev/lcd0

This special comment between “{%” and “%}” follows the syntax of Config::Model::Loader. A small configuration model is declared there to enhance the model generated from LCDd.conf file.

Here are the main parts:

  • default~ suppresses the default value of the “device” parameter declared in the original LCDd.conf (i.e. “/dev/lcd0”)
  • compute and the 3 lines below compute a default value for the device file. Since “use_eval” is true, the formula is evaluated as Perl code. This code will return /dev/lcd-imon if this file is found. Otherwise, /dev/lcd0 is returned. Hence, either /dev/lcd-imon or /dev/lcd0 will be used as the default value. allow_override=1 lets the user override this computed value
  • warn_if and the 3 lines below test the configured device file with the Perl instructions provided by the code parameter. There, the device value is available in the $_ variable. This code will return true if /dev/lcd-imon exists and if the configured device does not use it. This will trigger a warning that will show the specified message.
  • Similarly, warn_unless and the 3 lines below warn the user if the configured device file is not found.

In both the warn_unless and warn_if parts, the fix code snippet is run by the command cme fix lcdproc and is used to “repair” the warning condition. In this case, the fix consists of resetting the device configuration value so that the computed value above can be used.

cme fix lcdproc is triggered during the package postinst script installed by dh_cme_upgrade.

Come to think of it, generating a configuration model from a configuration file can probably be applied to other projects: for instance, php.ini and kdmrc are also shipped with detailed comments. Maybe I should make a more generic model generator from the example used to generate the lcdproc model…

Well, I will do it if people show interest. Not in the form “yeah, that would be cool”, but in the form “yes, I will use your work to generate a configuration model for project [...]”. I’ll let you fill in the blank ;-)


Tagged: Config::Model, configuration, debian, lcdproc, Perl, upgrade

Eugene V. Lyubimkin: (Finland) FUUG foundation gives money for FLOSS development

6 July, 2014 - 23:46
You live in Finland? You work on a FLOSS project, or a project helping FLOSS in one way or another? Apply for FUUG's limited sponsorship program! Rules and details (in Finnish): http://coss.fi/2014/06/27/fuugin-saatio-jakaa-apurahoja-avoimen-koodin-edistamiseksi/ .

Ian Campbell: Setting absolute date based Amazon S3 bucket lifecycles with curl

6 July, 2014 - 18:45

For my local backup regimen I use flexbackup to create a full backup twice a year and differential/incremental backups on a weekly/monthly basis. I then upload these to a new Amazon S3 bucket for each half year (so each bucket corresponds to a full backup plus the associated differentials and incrementals).

I then set the bucket's lifecycle to archive to Glacier (cheaper offline storage) from the month after that half year has ended (reducing costs) and to delete it a year after the half year ends. It used to be possible to do this via the S3 web interface, but the absolute date based options seem to have been removed in favour of time since last update, which is not what I want. However the UI will still display such lifecycles if they are configured, and directs you to the REST API to set them up.

I had a look around but couldn't find any existing CLI tools to do this directly, but I figured it must be possible with curl. A little bit of reading later I found that it was possible, but it involved some faff calculating signatures etc. Luckily EricW has written Amazon S3 Authentication Tool for Curl (AKA s3curl) which automates the majority of that faff. The tool is "New BSD" licensed according to that page, or Apache 2.0 licensed according to the included LICENSE file and code comments.

Setup

Following the included README, set up ~/.s3curl containing your id and secret key (I called mine personal, which I then use below).
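
For reference, the file is a small Perl snippet along these lines (a sketch: field names per the s3curl README, and the key values are obviously placeholders):

$ cat ~/.s3curl
%awsSecretAccessKeys = (
    personal => {
        id  => 'YOUR_AWS_ACCESS_KEY_ID',
        key => 'YOUR_AWS_SECRET_ACCESS_KEY',
    },
);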

Getting the existing lifecycle

Retrieving an existing lifecycle is pretty easy. For the bucket which I used for the first half of 2014:

$ s3curl --id=personal -- --silent http://$bucket.s3.amazonaws.com/?lifecycle | xmllint --format -
<?xml version="1.0" encoding="UTF-8"?>
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ID>Archive and Expire</ID>
    <Prefix/>
    <Status>Enabled</Status>
    <Transition>
      <Date>2014-07-31T00:00:00.000Z</Date>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Date>2015-01-31T00:00:00.000Z</Date>
    </Expiration>
  </Rule>
</LifecycleConfiguration>

See GET Bucket Lifecycle for details of the XML.

Setting a new lifecycle

The desired configuration needs to be written to a file. For example to set the lifecycle for the bucket I'm going to use for the second half of 2014:

$ cat s3.lifecycle
<LifecycleConfiguration>
  <Rule>
    <ID>Archive and Expire</ID>
    <Prefix/>
    <Status>Enabled</Status>
    <Transition>
      <Date>2015-01-31T00:00:00.000Z</Date>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Date>2015-07-31T00:00:00.000Z</Date>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
$ s3curl --id=personal --put s3.lifecycle --calculateContentMd5 -- http://$bucket.s3.amazonaws.com/?lifecycle

See PUT Bucket Lifecycle for details of the XML.

Daniel Pocock: News team jailed, phone hacking not fixed though

6 July, 2014 - 16:20

This week former News of the World executives were sentenced, most going to jail, for the British phone hacking scandal.

Noticeably absent from the trial and much of the media attention are the phone companies. Did they know their networks could be so systematically abused? Did they care?

In any case, the public has never been fully informed about how phones have been hacked. Speculation has it that phone hackers were guessing PINs for remote voicemail access, typically trying birthdates and weak PINs like 0000 or 1234.

There is more to it

Those in the industry know that there are additional privacy failings in mobile networks, especially the voicemail service. It is not just in the UK either.

There are various reasons for not sharing explicit details on a blog like this and comments concerning such techniques can't be accepted.

Nonetheless, there are some points that do need to be made:

  • it is still possible for phones, especially voicemail, to be hacked on demand
  • an attacker does not need expensive equipment nor do they need to be within radio range (or even the same country) as their target
  • the attacker does not need to be an insider (phone company or spy agency employee)

Disable voicemail completely - the only way to be safe

The bottom line is that the only way to prevent voicemail hacking is to disable the phone's voicemail service completely. Voicemail is not really necessary given that most phones support email now. For those who feel they need it, consider running the voicemail service on your own private PBX using free software like Asterisk or FreeSWITCH. Some Internet telephony service providers also offer third-party voicemail solutions that are far more secure than those default services offered by mobile networks.

To disable voicemail, simply do two things:

  • send a letter to the phone company telling them you do not want any voicemail box in their network
  • in the mobile phone, select the menu option to disable all diversions, or manually disable each diversion one by one (e.g. disable forwarding when busy, disable forwarding when not answered, disable forwarding when out of range)
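
On most GSM handsets the diversions can also be cleared from the dialler using the standard MMI codes (a sketch; operator support varies, so verify the codes with your carrier):

##002#   erase all call diversions at once
##21#    erase unconditional forwarding
##67#    erase forwarding when busy
##61#    erase forwarding when not answered
##62#    erase forwarding when out of range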

Russell Coker: Desktop Publishing is Wrong

6 July, 2014 - 14:53

When I first started using computers a “word processor” was a program that edited text. The most common and affordable printers were dot-matrix and people who wanted good quality printing used daisy wheel printers. Text from a word processor was sent to a printer a letter at a time. The options for fancy printing were bold and italic (for dot-matrix), underlines, and the use of spaces to justify text.

It really wasn’t much good if you wanted to include pictures, graphs, or tables. But if you just wanted to write some text it worked really well.

When you were editing text it was typical that the entire screen (25 rows of 80 columns) would be filled with the text you were writing. Some word processors used 2 or 3 lines at the top or bottom of the screen to display status information.

Some time after that desktop publishing (DTP) programs became available. Initially most people had no interest in them because of the lack of suitable printers, the early LASER printers were very expensive and the graphics mode of dot matrix printers was slow to print and gave fairly low quality. Printing graphics on a cheap dot matrix printer using the thin continuous paper usually resulted in damaging the paper – a bad result that wasn’t worth the effort.

When LASER and Inkjet printers started to become common word processing programs started getting many more features and basically took over from desktop publishing programs. This made them slower and more cumbersome to use. For example Star Office/OpenOffice/LibreOffice has distinguished itself by remaining equally slow as it transitioned from running on an OS/2 system with 16M of RAM in the early 90′s to a Linux system with 256M of RAM in the late 90′s to a Linux system with 1G of RAM in more recent times. It’s nice that with the development of PCs that have AMD64 CPUs and 4G+ of RAM we have finally managed to increase PC power faster than LibreOffice can consume it. But it would be nicer if they could optimise for the common cases. LibreOffice isn’t the only culprit, it seems that every word processor that has been in continual development for that period of time has had the same feature bloat.

The DTP features that made word processing programs so much slower also required more menus to control them. So instead of just having text on the screen with maybe a couple of lines for status we have a menu bar at the top followed by a couple of lines of “toolbars”, then a line showing how much width of the screen is used for margins. At the bottom of the screen there’s a search bar and a status bar.

Screen Layout

By definition the operation of a DTP program will be based around the size of the paper to be used. The default for this is A4 (or “Letter” in the US) in a “portrait” layout (higher than it is wide). The cheapest (and therefore most common) monitors in use are designed for displaying wide-screen 16:9 ratio movies. So we have images of A4 paper with a width:height ratio of 0.707:1 displayed on a wide-screen monitor with a 1.777:1 ratio. This means that only about 40% of the screen space would be used if you don’t zoom in (but if you zoom in then you can’t see many rows of text on the screen). One of the stupid ways this is used is by companies that send around word processing documents when plain text files would do, so everyone who reads the document uses a small portion of the screen space and a large portion of the email bandwidth.

Note that this problem of wasted screen space isn’t specific to DTP programs. When I use the Google Keep website [1] to edit notes on my PC they take up a small fraction of the screen space (about 1/3 screen width and 80% screen height) for no good reason. Keep displays about 70 characters per line and 36 lines per page. Really every program that allows editing moderate amounts of text should allow more than 80 characters per line if the screen is large enough and as many lines as fit on the screen.

One way to alleviate the screen waste on DTP programs is to use a “landscape” layout for the paper. This is something that all modern printers support (AFAIK the only printers you can buy nowadays are LASER and ink-jet and it’s just a big image that gets sent to the printer). I tried to do this with LibreOffice but couldn’t figure out how. I’m sure that someone will comment and tell me I’m stupid for missing it, but I think that when someone with my experience of computers can’t easily figure out how to perform what should be a simple task then it’s unreasonably difficult for the vast majority of computer users who just want to print a document.

When trying to work out how to use landscape layout in LibreOffice I discovered the “Web Layout” option in the “View” menu which allows all the screen space to be used for text (apart from the menu bar, tool bars, etc). That also means that there are no page breaks! That means I can use LibreOffice to just write text, take advantage of the spelling and grammar correcting features, and only have screen space wasted by the tool bars and menus etc.

I never worked out how to get Google Docs to use a landscape document or a single webpage view. That’s especially disappointing given that the proportion of documents that are printed from Google Docs is probably much lower than most word processing or DTP programs.

What I Want

What I’d like to have is a word processing program that’s suitable for writing draft blog posts and magazine articles. For blog posts most of the formatting is done by the blog software and for magazine articles the editorial policy demands plain text in most situations, so there’s no possible benefit of DTP features.

The ability to edit a document on an Android phone and on a Linux PC is a good feature. While the size of a phone screen limits what can be done it does allow jotting down ideas and correcting mistakes. I previously wrote about using Google Keep on a phone for lecture notes [2]. It seems that the practical ability of Keep to edit notes on a PC is limited to roughly the notes for a 45 minute lecture. So while Keep works well for that task it won't do well for anything bigger unless Google make some changes.

Google Docs is quite good for editing medium size documents on a phone if you use the Android app. Given the limitations of the device size and input capabilities it works really well. But it’s not much good for use on a PC.

I’ve seen a positive review of One Note from Microsoft [3]. But apart from the fact that it’s from Microsoft (with all the issues that involves) there’s the issue of requiring another account. Using an Android phone requires a Gmail account (in practice for almost all possible uses if not in theory) so there’s no need to get an extra account for Google Keep or Docs.

What would be ideal is an Android editor that could talk to a cloud service that I run (maybe using WebDAV) and which could use the same data as a Linux-X11 application.

Any suggestions?

Matthew Palmer: Witness the security of this fully DNSSEC-enabled zone!

6 July, 2014 - 13:00

After dealing with the client side of the DNSSEC puzzle last week, I thought it behooved me to also go about getting DNSSEC going on the domains I run DNS for. Like the resolver configuration, the server side work is straightforward enough once you know how, but boy howdy are there some landmines to be aware of.

One thing that made my job a little less ordinary is that I use and love tinydns. It’s an amazingly small and simple authoritative DNS server, strong in the Unix tradition of “do one thing and do it well”. Unfortunately, DNSSEC is anything but “small and simple” and so tinydns doesn’t support DNSSEC out of the box. However, Peter Conrad has produced a patch for tinydns to do DNSSEC, and that does the trick very nicely.

A brief aside about tinydns and DNSSEC, if I may… Poor key security is probably the single biggest compromise vector for crypto. So you want to keep your keys secure. A great way to keep keys secure is to not put them on machines that run public-facing network services (like DNS servers). So, you want to keep your keys away from your public DNS servers. A really great way of doing that would be to have all of your DNS records somewhere out of the way, and when they change regenerate the zone file, re-sign it, and push it out to all your DNS servers. That happens to be exactly how tinydns works. I happen to think that tinydns fits very nicely into a DNSSEC-enabled world. Anyway, back to the story.

Once I’d patched the tinydns source and built updated packages, it was time to start DNSSEC-enabling zones. This breaks down into a few simple steps:

  1. Generate a key for each zone. This will produce a private key (which, as the name suggests, you should keep to yourself), a public key in a DNSKEY DNS record, and a DS DNS record. More on those in a minute.

    One thing to be wary of, if you’re like me and don’t want or need separate “Key Signing” and “Zone Signing” keys. You must generate a “Key Signing” key – this is a key with a “flags” value of 257. Doing this wrong will result in all sorts of odd-ball problems. I wanted to just sign zones, so I generated a “Zone Signing” key, which has a “flags” value of 256. Big mistake.

    Also, the DS record is a hash of everything in the DNSKEY record, so don’t just think you can change the 256 to a 257 and everything will still work. It won’t.

  2. Add the key records to the zone data. For tinydns, this is just a matter of copying the zone records from the generated key into the zone file itself, and adding an extra pseudo record (it’s all covered in the tinydnssec howto).

  3. Publish the zone data. Reload your BIND config, run tinydns-sign and tinydns-data then rsync, or do whatever it is PowerDNS people do (kick the database until replication starts working again?).

  4. Test everything. I found the Verisign Labs DNSSEC Debugger to be very helpful. You want ticks everywhere except for where it’s looking for DS records for your zone in the higher-level zone. If there are any other freak-outs, you’ll want to fix those – because broken DNSSEC will take your domain off the Internet in no time.

  5. Tell the world about your DNSSEC keys. This is simply a matter of giving your DS record to your domain registrar, for them to add it to the zone data for your domain’s parent. Wherever you’d normally go to edit the nameservers or contact details for your domain, you probably want to do to the same place and look for something about “DS” or “Domain Signer” records. Copy and paste the details from the DS record in your zone into there, submit, and wait a minute or two for the records to get published.

  6. Test again. Before you pat yourself on the back, make sure you've got a full board of green ticks in the DNSSEC Debugger. If anything's wrong, you want to roll back immediately, because broken DNSSEC means that anyone using a DNSSEC-enabled resolver just lost the ability to see your domain.

That’s it! There’s a lot of complicated crypto going on behind the scenes, and DNSSEC seems to revel in the number of acronyms and concepts that it introduces, but the actual execution of DNSSEC-enabling your domains is quite straightforward.
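
For a quick command-line sanity check alongside the web-based debugger, dig can confirm that the signatures and delegation records are actually being served (example.org below is just a placeholder for your own zone):

$ dig +dnssec +multiline DNSKEY example.org   # the zone's keys, plus their RRSIG
$ dig +dnssec +multiline A www.example.org    # signed answers should carry RRSIG records
$ dig DS example.org                          # the DS record published in the parent zone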

Maximilian Attems: xserver-xorg-video-intel 2.99.912+20140705 in experimental

6 July, 2014 - 07:00

Since the release of xf86-video-intel 2.99.912 a month ago, several enhancements and fixes have piled up in xf86-video-intel git. Again, testing is very much appreciated: xserver-xorg-video-intel packages.

Mario Lang: I love my MacBookAir with Debian

5 July, 2014 - 16:25

In short: I love my MacBook Air. It is the best (laptop) hardware I ever owned. I have seen hardware which was much more flaky in the past. I can set the display backlight to zero via software, which saves me a lot of battery life and also offers a bit of anti-spy-across-my-shoulder support. WLAN and bluetooth work nicely.

And I just love the form-factor and the touch-feeling of the hardware. I even had the bag I use to carry my braille display modified so that the Air just fits in.

I can't say how it behaves with X11. Given how flaky accessibility with graphical desktops on Linux is, I have still not made the switch. My MacBookAir is my perfect mobile terminal, I LOVE it.

I am sort of surprised about the recent rant of Paul about MacBook Hardware. It is rather funny that we perceive the same technology so radically different.

And after reading the second part of his rant I am wondering if I am no longer allowed to consider myself part of the "hardcore F/OSS world", because I don't consider Apple as evil as apparently most others. Why? Well, first of all, I actually like the hardware. Secondly, you have to show me a vendor first that builds usable accessibility into their products, and I mean all their products, without any extra price-tag attached. Once the others start to consider people with disabilities, we can talk about apple-bashing again. But until then, sorry, you don't see the picture as I do.

Apple was the first big company on the market to take accessibility seriously. And they are still unbeaten, at least when it comes to bells and whistles included. I can unbox and configure any Apple product sold currently completely without assistance. With some products, you just need to know a single keypress (triple-press the home button for touch devices and Cmd+F5 for Mac OS X), and with others, during initial bootup, a speech synthesizer even tells you how to enable accessibility in case you need it.

And after that is enabled, I can perform the setup of the device completely on my own. I don't need help from anyone else. And after the setup is complete, I can use 95% of the functionality provided by the operating system.

And I am blind, part of a very small marginal group, so to speak.

In Debian circles, I have even heard the sentiment that we supposedly have to accept that small marginal groups are ignored sometimes. Well, as long as we think that way, as long as we strictly think economically, we will never be able to go there, fully. And we will never be the universal operating system, actually. Sorry to say that, but I think there is some truth to it.

So, who is evil? Scratch your own itch doesn't always work to cover everything. How do we motivate contributors to work on things they don't personally need (yet)? How can we ensure that complicated but seldomly used features stay stable and do not fall to dust just because some upstream decides to rewrite an essential subcomponent of the dependency tree? I don't know. All I know is that these issues need to be solved in an universal operating system.

John Goerzen: The Heights of Coronado

5 July, 2014 - 12:28

Near the beautiful Swedish town of Lindsborg, Kansas, there stands a hill known as Coronado Heights. It lies in the midst of the Smoky Hills, named for the smoke-like mist that sometimes hangs in them. We Kansans smile our usual smile when we tell the story of how Francisco Vásquez de Coronado famously gave up his search for gold after reaching this point in Kansas.

Anyhow, it was just over a year ago that Laura, Jacob, Oliver, and I went to Coronado Heights at the start of summer, 2013 — our first full day together as a family.

Atop Coronado Heights sits a “castle”, an old WPA project from the 1930s:

The view from up there is pretty nice:

And, of course, Jacob and Oliver wanted to explore the grounds.

As exciting as the castle was, simple rocks and sand seemed to be just as entertaining.

After Coronado Heights, we went to a nearby lake for a picnic. After that, Jacob and Oliver wanted to play at the edge of the water. They loved to throw rocks in and observe the splash. Of course, it pretty soon descended (or, if you are a boy, “ascended”) into a game of “splash your brother.” And then to “splash Dad and Laura”.

Fun was had by all. What a wonderful day! Writing the story reminds me of a little while before that — the first time all four of us enjoyed dinner and smores at a fire by our creek.

Jacob and Oliver insisted on sitting — or, well, flopping — on Laura’s lap to eat. It made me smile.

(And yes, she is wearing a Debian hat.)


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.