Planet Debian

Planet Debian - http://planet.debian.org/

Vasudev Kamath: Note to Self: LVM Shrink Resize HowTo

5 October, 2014 - 14:30

Recently I had to reinstall a system at the office with Debian Wheezy, and I thought I should use this opportunity to experiment with LVM. I have never used LVM to date, even though I've been using Linux for more than 5 years now. I know many DD friends who use LVM with LUKS encryption, and I always wanted to experiment with it, but since my laptop is the only machine I have and it is currently in perfect shape, I didn't dare experiment there. This reinstall was a golden opportunity to experiment and learn something new.

I used a Wheezy CD ISO downloaded with jigdo for the installation. Before getting to LVM, let me go slightly off topic and describe how I prepared the USB stick. I mention this because I hadn't done an installation for quite a while; the last one was back in the Squeeze days, so, as usual, I blindly executed the following command.

cat debian-wheezy.iso > /dev/sdb

Surprisingly, the USB stick didn't boot! I was getting "Corrupt or missing ISO.bin". So next I tried preparing it with dd.

dd if=debian-wheezy.iso of=/dev/sdb

Surprisingly this also didn't work, and I got the same error message as above. At this point I went back to the Debian manual, looked up the installation steps, and found a new way!

cp debian-wheezy.iso /dev/sdb

Look at the destination: it's a device, and voilà, this worked! This is something new I learnt, and I'm surprised how easy it now is to prepare a USB stick. But I still don't understand why the first two methods failed! If you know, please do share.
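For what it's worth, here is a minimal sketch of how one might verify the written stick afterwards (the device name /dev/sdb is just the example used above; cmp and stat are the usual GNU tools):

cp debian-wheezy.iso /dev/sdb
sync    # make sure everything has actually been written to the stick
cmp --bytes=$(stat -c%s debian-wheezy.iso) debian-wheezy.iso /dev/sdb && echo "image on stick matches the ISO"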

Now, coming back to LVM. I chose LVM when the disk partitioning step asked, used the guided partitioning method provided by debian-installer, and ended up with the following layout:

$ lvs
  LV     VG          Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  home   system-disk -wi-ao-- 62.34g
  root   system-disk -wi-ao--  9.31g
  swap_1 system-disk -wi-ao--  2.64g

So the guided partitioning of debian-installer allocates roughly 10G to root and the rest to home and swap. That is normally fine, but as I started installing the required software I could see root running out of space quickly, so I wanted to resize root and give it 10G more. For that I first needed to shrink home by 10G, which in turn requires unmounting the home partition. Unmounting /home on a running system isn't possible, so I booted into recovery mode assuming I could unmount it there, but I couldn't. lsof didn't show anyone using /home; after searching a bit I found the fuser command, and it turns out the kernel itself is holding /home, since it is the one that mounted it.

$ fuser -vm /home
                     USER        PID ACCESS COMMAND
/home:               root     kernel mount /home

So it isn't possible to unmount /home in recovery mode either. Online material told me to use a live CD for this, but I didn't have the patience for that, so I just commented out the /home entry in /etc/fstab and rebooted. This time it worked, and /home was not mounted in recovery mode. Now comes the hard part, shrinking home; thanks to the TLDP document on reducing a logical volume, I could do it with the following steps (note the order: when shrinking, the filesystem is resized first, then the logical volume).

# e2fsck -f /dev/volume-name/home
# resize2fs /dev/volume-name/home 52G
# lvreduce -L-10G /dev/volume-name/home
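At this point the 10G freed from home should show up as free space in the volume group, which can be checked with the LVM tools before touching root:

# vgs system-disk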

And now the next part, extending the root partition online; again thanks to the TLDP document on extending a logical volume, the following commands did it (when extending, the order is reversed: grow the logical volume first, then the filesystem).

# lvextend -L+10G /dev/volume-name/root
# resize2fs /dev/volume-name/root

And now the important part: uncomment the /home line in /etc/fstab so it will be mounted normally on the next boot, and reboot! On login I can see my partitions updated.

# lvs
  LV     VG          Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  home   system-disk -wi-ao-- 52.34g
  root   system-disk -wi-ao-- 19.31g
  swap_1 system-disk -wi-ao--  2.64g
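For completeness, the /etc/fstab entry that was commented out for the resize and then restored looks roughly like this (the device path is only illustrative; the installer may well have written a UUID here instead):

/dev/mapper/system--disk-home  /home  ext4  defaults  0  2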

I've started liking LVM more now! :)

Jo Shields: The unstoppable march of mobile technology

5 October, 2014 - 00:23

It’s been more than 2 years since my last post about my smartphone. In the time after that post I upgraded my much loved Windows Phone 7 device to Windows Phone 8 (which I got rid of within months, for sucking), briefly used Firefox OS, then eventually used a Nexus 4 for at least a year.

After years of terrible service provision and pricing, I decided I would not stay with my network Orange a moment longer – and in getting a new contract, I would get a new phone too. So on Friday, I signed up to a new £15 per month contract with Three, including 200 minutes, unlimited data, and 25GB of data roaming in the USA and other countries (a saving of £200,000 per month versus Orange). Giffgaff is similarly competitive for data, but not roaming. No other network in the UK is competitive.

For the phone, I had a shortlist of three: Apple iPhone 6, Sony Xperia Z3 Compact, and Samsung Galaxy Alpha. These are all “small” phones by 2014 standards, with a screen about the same size as the Nexus 4. I didn’t consider any Windows Phone devices because they still haven’t shipped a functional music player app on Windows Phone 8. Other more “fringe” OSes weren’t considered, as I insist on trying out a real device in person before purchase, and no other comparable devices are testable on the high street.

iPhone 6

This was the weakest offering for me: £120 more than the Samsung and almost £200 more than the Sony, with a much lower hardware specification, physically larger, less attractive, and worst of all, mandatory use of iTunes for Windows for music syncing.

Apple iPhone 6, press shot from apple.com, all rights reserved

The only real selling point for me would be for access to iPhone apps. And, I guess, decreased chance of mockery by co-workers.

Galaxy Alpha

Now on to the real choices. I’ve long felt that Samsung’s phones are ugly, plasticky tat – the Galaxy S5 is popular and well-marketed, but looks and feels cheap compared to HTC’s unibody aluminium One. They’ve also committed the cardinal sin of gimping the specifications of their “mini” (normal-sized) phones compared to the “normal” (gargantuan) versions. The newly released S5 Mini is about the same spec as early 2012’s S3, the S4 Mini was mostly an S2 internally, and so on.

However, whilst HTC have continued along these lines, Samsung have finally released a proper phone under 5″, in the Alpha.

Samsung Galaxy Alpha press shot from samsungmobile.com, all rights reserved

The Alpha combines a 4.7″ AMOLED screen, a plastic back, metal edges, 8-core big.LITTLE processor, and 2GB RAM. It is a PRETTY device – the screen really dazzles (as is the nature of OLED). It feels like a mix of design cues from an iPhone and Samsung’s own, keeping the angular feel of iPhone 4->5S rather than the curved edges on the iPhone 6.

The Galaxy Alpha was one of the two devices I seriously considered.

Xperia Z3 Compact

The other Android device I considered was the Compact version of Sony’s new Xperia Z3. Unlike other Android vendors, Sony decided that “mini” shouldn’t mean “low end” when they released the Z1 compact earlier this year. The Z3 follows suit, where the same CPU and storage are found on both the big and little versions.

Sony Xperia Z3 Compact press shot from Sony Xperia Picasa album. CC BY-NC-SA 3.0

The Z3C has a similar construction to the Nexus 4, with a glass front and back and a plastic rim. The specification is similar to the Galaxy Alpha’s (its quad-core 2.5GHz Qualcomm processor is about 15% faster than the big.LITTLE Exynos in the Galaxy Alpha). It differs in a few places: LCD rather than AMOLED (bad); a non-removable (bad) 2600 mAh battery (good) compared to the removable 1860 mAh one in the Samsung; waterproofing (good); and a less hateful Android shell (Sony’s Xperia skin versus Samsung’s TouchWiz).

For those considering a Nexus-4-replacement class device (yes, rjek, that means you), both the Samsung and the Sony are worth a look. They both have good points and bad points. In the end, both need to be tested to form a proper opinion. But for me, the chunky battery and tasteful green were enough to swing it for the Sony. So let’s see where I stand in a few months’ time. Every phone I’ve owned, I’ve ended up hating it for one reason or another. My usual measure for whether a phone is good or not is how long it takes me to hit the “I can’t use this” limit. The Nokia N900 took me about 30 minutes, the Lumia 800 lasted months. How will the Z3 Compact do? Time will tell.

Petter Reinholdtsen: Ubuntu used to show the bread prices at ICA Storo

4 October, 2014 - 21:20

Today I came across an unexpected Ubuntu boot screen. Above the bread shelf in the ICA shop at Storo in Oslo, the GRUB menu of Ubuntu with Linux kernel 3.2.0-23 (i.e. probably version 12.04 LTS) was stuck on a screen that normally shows the bread types and prices:

If it had booted as it was supposed to, I would never have known about this hidden Linux installation. It is interesting what errors can reveal.

Steve Kemp: Kraków was nice

4 October, 2014 - 20:20

We returned safely from Kraków, despite a somewhat turbulent flight home.

There were many pictures taken, but thus far I've only posted a random night-time shot. Perhaps more will appear in the future.

In other news I've just made a new release of the chronicle blog compiler, so 5.0.7 should shortly appear on CPAN.

The release contains a bunch of minor fixes, and some new facilities relating to templates.

It seems likely that in the future there will be the ability to create "static pages" alongside the blog entries, tag clouds, etc. The suggestion was raised on the GitHub issue tracker, and as a proof of concept I hacked up a solution which works entirely via the chronicle plugin system, proving that the new development work wasn't a waste of time - especially when combined with the significant speedups in the new codebase.

(ObRandom: Mailed the Debian package maintainer to see if there was interest in changing. Also mailed a couple of people I know who are using the old code to see if they had comments on the new code, or had any compatibility issues. No replies from either, yet. *shrugs*)

Petter Reinholdtsen: New lsdvd release version 0.17 is ready

4 October, 2014 - 14:40

The lsdvd project got a new set of developers a few weeks ago, after the original developer decided to step down and pass the project to fresh blood. This project is now maintained by Petter Reinholdtsen and Steve Dibb.

I just wrapped up a new lsdvd release, available in git or from the download page. This is the changelog dated 2014-10-03 for version 0.17.

  • Ignore 'phantom' audio, subtitle tracks
  • Check for garbage in the program chains, which indicates that a track is non-existent, to work around additional copy protection
  • Fix displaying content type for audio tracks, subtitles
  • Fix palette display of first entry
  • Fix include orders
  • Ignore read errors in titles that would not be displayed anyway
  • Fix the chapter count
  • Make sure the array size and the array limit used when initialising the palette size are the same.
  • Fix array printing.
  • Correct subsecond calculations.
  • Add sector information to the output format.
  • Clean up code to be closer to ANSI C and compile without warnings with more GCC compiler warnings.

These changes bring together patches for lsdvd that have been in use in various Linux and Unix distributions, as well as patches submitted to the project over the last nine years. Please check it out. :)

Laura Arjona: I’ve applied to be a (non uploading) Debian Developer

4 October, 2014 - 08:05

I’ve just applied to be a (non uploading) Debian Developer. I filled in the form and decrypted the message that I received to confirm my application (I had read the important documents a long time ago, and again some weeks ago, and again some days ago).

I was expecting to gather some GPG signatures today, but the event was cancelled (postponed). So beginning next week, I’ll try to gather GPG signatures one by one, by myself.

The outdated translations of the website are finished (no more yellow stickers on the Spanish http://www.debian.org!), and I have already begun translating the new files.

I’ve sent mails to say thank you to some of the people who helped me during this phase as a Debian Contributor.

I think I’ve done everything that I can do for now. So let’s wait.

I don’t know how I will sleep tonight.

Comments?

You can comment in this pump.io thread.


Filed under: My experiences and opinion Tagged: Communities, Contributing to libre software, Debian, encryption, English, Free Software, Moving into free software, translations

Ian Donnelly: Config::Model and Elektra

4 October, 2014 - 04:55

Hi Everybody,

Today I want to talk about the different approaches of Elektra and Config::Model. We have received a lot of questions lately about why Elektra is necessary and what differentiates it from similar tools like Config::Model. While there are a lot of similarities between Config::Model and Elektra, there are some key differences, and those are what I will focus on in this post.

Once a specification is defined for Elektra and a plug-in is written to work with that specification, other developers will be able to reuse these specifications for programs that have similar configurations (such as a specification and plug-in for the ini file type). Additionally, specifications, once defined in KDB, can be used across multiple programs. For instance, I could define a specification for my program within KDB:
[/myapp/file_dialog/show_hidden_files]
type=Boolean
default=true

Any other program could then use my specification just by referring to show_hidden_files. These features allow Elektra to solve the problem of cross-platform configuration by providing a consistent API, and also let users easily discover other applications’ configurations, which allows for easier integration between programs.
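As a minimal sketch (assuming Elektra’s kdb command-line tool is installed and the specification above is in place), any script or program could look the key up by its cascading name:

kdb get /myapp/file_dialog/show_hidden_files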

Config::Model also aims to provide a unified interface for configuration data, and it also supports validation, such as the type=Boolean in the example above. The biggest difference between the two projects is that Elektra is intended for use by the programs themselves as well as by external GUIs and validation tools, unlike Config::Model. Config::Model provides a tool that allows developers to give users a means to interactively edit configuration data in a safe way. Additionally, Elektra uses self-describing data: all the specifications are saved within KDB as metadata. Further differences are that validators for Elektra can be written in any language, because the specifications are just stored as data, and that they can enforce constraints on any access, because plug-ins define the behaviour of KDB itself.

Tying this all together with my GSoC project is the topic of three-way merges. Config::Model actually does not rely on a base version for merges, since its specifications must all be complete; this is a very good approach to handling merges in an advanced way. It is an avenue that Elektra would like to explore in the future, once we have enough specifications to handle all types of configuration.

I hope that this post clarifies the different approaches of Elektra and Config::Model. While both of these tools offer a better way to handle configuration files, they have different goals and implementations that make them unique. I want to mention that we have a good relationship with the developers of Config::Model, who supported my Google Summer of Code project. We believe that both of these tools have their own place and uses, and they do not compete to achieve the same goals.

For now,
Ian S. Donnelly

Gunnar Wolf: Obsoletion notice on githubredir

4 October, 2014 - 02:58

Back in 2009, I set up githubredir.debian.net, a service that allowed following the tags of a GitHub-based project with uscan.

Maybe a year or two later, GitHub added the needed bits in their interface, so it was no longer necessary to provide this service. Still, I kept it alive in order not to break things.
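These days a debian/watch file can point at the GitHub tags page directly, roughly along these lines (user and project are placeholders for the upstream account and repository):

version=3
https://github.com/user/project/tags .*/v?(\d[\d.]+)\.tar\.gz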

But as it is just a silly web scraper, every time something changes in GitHub, the redirector breaks. I decided today that, as it is no longer a very useful project, it should be retired.

So, in the not too distant future (I guess, next time anything breaks), I will remove it. Meanwhile, every page generated will display this:

(of course, with the corresponding project/author names in)

Consider yourselves informed.


Thorsten Glaser: mksh R50c released, security fix

4 October, 2014 - 02:12

The MirBSD Korn Shell has got a new security and maintenance release.

This release fixes one mksh(1)-specific issue when importing values from the environment. The issue was detected by the main developer during careful code review, while checking whether the shell is affected by the recent “shellshock” bugs in GNU bash, many of which also affect AT&T ksh93. (The answer is: no, none of these bugs affects mksh.) Stéphane Chazelas kindly provided me with an in-depth look at how this can be exploited. The issue has not got a CVE identifier because it was identified as low-risk. The problem is that the environment import filter mistakenly accepted variables named “FOO+” (for any FOO), which are, by general environ(7) syntax, distinct from “FOO”, and treated them as appending to the value of “FOO”. An attacker who already had access to the environment could thus append values to parameters passed through programs (including sudo(8) or setuid ones) to shell scripts, including indirectly, after those programs intended to sanitise the environment, e.g. invalidating the last $PATH component. It could also be used to circumvent sudo’s environment filter, which protected against the vulnerability of an unpatched GNU bash being exploited.
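To illustrate the shape of the issue (purely a sketch; the variable name and value are made up), an environment entry whose name ends in a plus sign, as below, would in unpatched versions be imported as appending the given string to the shell’s own PATH value:

env 'PATH+=/tmp/evil' mksh -c 'echo "$PATH"'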

tl;dr: mksh is not affected by any of the shellshock bugs, but during careful code review we found a low-impact bug of our own, which does not affect any other shell. Please do update to mksh R50c quickly.

Mike Hommey: No PIE for you!

4 October, 2014 - 00:00

You are a software vendor. You distribute software on multiple operating systems. Let’s say your software is a mildly popular internet browser. Let’s say its logo represents an animal and a globe.

Now, because you care about the security of your users, let’s say you would like the entire address space of your application to be randomized, including the main executable portion of it. That would be neat, wouldn’t it? And there’s even a feature for that: Position independent executables.

You get that working on (almost) all the operating systems you distribute software on. Great.

Then a Gnome user (or an Ubuntu user, for that matter) comes, and tells you they downloaded your software tarball, unpacked it, and tried opening your software, but all they get is a dialog telling them:

Could not display “application-name”
There is no application installed for “shared library” files

Because, you see, a Position independent executable, in ELF terms, is actually a (position independent) shared library that happens to be executable, instead of being an executable that happens to be position independent.
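One way to see this for yourself (the binary name here is just an example; readelf comes with binutils):

$ readelf -h ./browser-binary | grep Type
  Type:                              DYN (Shared object file)

A traditional, non-PIE executable reports EXEC (Executable file) instead.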

And nautilus (the file manager in Gnome and Ubuntu’s Unity) usefully knows to distinguish between executables and shared libraries. And will happily refuse to execute shared libraries, even when they have the file-system-level executable bit set.

You’d think you can get around this by using a .desktop file, but the Exec field in those files requires a full path. (No, ./ doesn’t work unless the executable is in the nautilus process current working directory, as in, the path nautilus was run from)
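For reference, such a workaround .desktop file would have to look something like this (names and paths are placeholders), which is exactly the problem: the absolute path depends on where the user happened to unpack the tarball:

[Desktop Entry]
Type=Application
Name=Application Name
Exec=/home/user/application-dir/application-name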

Dear lazyweb, please prove me wrong and tell me there’s a way around this.

Lars Wirzenius: Matthew Garrett and Intel and the so-called gamergate

3 October, 2014 - 21:55

Kudos to Matthew for taking a stance. It has, not surprisingly, provoked a lot of comments and feedback, most of it unpleasant.

If I did anything that was directly related to Intel, I'd join him, but I do very, very little architecture dependent stuff anymore.

I will, however, say this: even if the "gamergate" thing were actually about good journalism and ethics (and it's clear it isn't), if your reaction to a differing opinion is abuse, harassment, and other kinds of psychological violence, you're not making anything better; you're making it all worse.

Reasonable people can handle disagreement without any kind of violence.

Marco d'Itri: 15 years of whois

3 October, 2014 - 13:32

Exactly 15 years ago I uploaded to Debian the first release of my whois client.

At the end of 1999 the United States Government forced Network Solutions, at the time the only registrar for the .com, .net and .org top-level domains, to split their functions into a registry and a registrar and to allow competing registrars to operate.

Since then, two whois queries are needed to access the data for a domain in a TLD operating with a thin registry model: first one to the registry, to find out which registrar was used to register the domain, and then one to that registrar, to actually get the data.
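For example, looking up a .com domain by hand takes something like the following (the registrar server name is a placeholder; the first server is the .com registry whois server):

whois -h whois.verisign-grs.com example.com
whois -h whois.some-registrar.example example.com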

Being as lazy as I am, I thought this was unacceptable, so I implemented a whois client that would know which whois server to query for all TLDs and would then automatically follow the referrals to the registrars.

But the initial reason for writing this program was to replace the simplistic BSD-derived whois client that was shipped with Debian with one that would know which server to query for IP addresses and autonomous system numbers, a useful feature in a time when people still used to manually report all their spam to the originating ISPs.

Over the years I have spent countless hours searching for the right servers for the domains of far away countries (something that has often been incredibly instructive) and now the program database is usually more up to date than the official IANA one.

One of my goals for this program has always been wide portability, so I am happy that over the years it was adopted by other Linux distributions, made available by third parties to all common variants of UNIX and even to systems as alien as Windows and OS/2.

Now that whois is 15 years old I am happy to announce that I have recently achieved complete world domination and that all Linux distributions use it as their default whois client.

Jonathan Dowland: Gigabyte J1900N-D3V Mini-ITX mainboard

3 October, 2014 - 02:04

For my 31st birthday I decided to build myself a computer, specifically a NAS and backup server which could do some other bits and pieces. I ended up buying a system based on the Gigabyte J1900N-D3V SoC from Mini-ITX (whose after-sales support is great, by the way).

I hope to write up a more comprehensive overview of what I've ended up with (probably in my rather dusty hardware section), but in the meantime I have a question for anyone else with this board:

If you've upgraded the BIOS, do the more recent BIOS versions insist on there being a display connected in order to boot?

Sadly the V1 BIOS version does, which seriously limits the utility of this board for my purposes. I did manage to flash the board up to V3, once, but it later decided to downgrade itself (believing the flashed BIOS to be corrupt). I haven't managed a second time. The EFI implementation in this board is... interesting. Convincing it to boot anything legacy is a tricky task.

As an aside, I recently stumbled across this suggestion on reddit to use an old-ish, Core-era Thinkpad T-series with a dock for this exact purpose: the spare UltraBay gives you two SATA drive slots, the laptop battery serves as a crude UPS, and there's a built-in keyboard and mouse, avoiding the issue I'm having with the J1900N-D3V. A Core i5 is more than fast enough for what I want to do, and it will have VT. Hindsight is a wonderful thing...

Andrew Pollock: [opinion] On Islamophobia

3 October, 2014 - 00:45

It's taken me a while to get sufficiently riled up about Australia's current Islamophobia outbreak, but it's been brewing in me for a couple of weeks.

For the record, I'm an Atheist, but I'll defend your right to practise your religion, just don't go pushing it on me, thank you very much. I'm also not a huge fan of Islam, because it does seem to lend itself to more violent extremism than other religions, and ISIS/ISIL/IS (whatever you want to call them) aren't doing Islam any favours at the moment. I'm against extremism of any stripe though. The Westboro Baptists are Christian extremists. They just don't go around killing people. I'm also not a big fan of the burqa, but again, I'll defend a Muslim woman's right to choose to wear one. The key point here is choice.

I got my carpets cleaned yesterday by an ethnic couple. I like accents, and I was trying to pick theirs. I thought they may have been Turkish. It turned out they were Kurdish. Whenever I hear "Kurd" I habitually stick "Bosnian" in front of it after the Bosnian War that happened in my childhood. Turns out I wasn't listening properly, and that was actually "Serb". Now I feel dumb, but I digress.

I got chatting with the lady while her husband did the work. I got a refresher on where most Kurds are/were (Northern Iraq) and we talked about Sunni versus Shia Islam, and how they differed. I learned a bit yesterday, and I'll have to have a proper read of the Wikipedia article I just linked to, because I suspect I'll learn a lot more.

We briefly talked about burqas, and she said that because they were Sunni, they were given the choice, and they chose not to wear it. That's the sort of Islam that I support. I suspect a lot of the women running around in burqas don't get a lot of say in it, but I don't think banning it outright is the right solution to that. Those women need to feel empowered enough to be able to cast off their burqas if that's what they want to do.

I completely agree that a woman in a burqa entering a secure place (for example Parliament House) needs to be identifiable (assuming that identification is verified for all entrants to Parliament House). If it's not, and they're worried about security, that's what the metal detectors are for. I've been to Dubai. I've seen how they handle women in burqas at passport control. This is an easily solvable problem. You don't have to treat burqa-clad women as second class citizens and stick them in a glass box. Or exclude them entirely.

Matthew Garrett: Actions have consequences (or: why I'm not fixing Intel's bugs any more)

3 October, 2014 - 00:40
A lot of the kernel work I've ended up doing has involved dealing with bugs on Intel-based systems - figuring out interactions between their hardware and firmware, reverse engineering features that they refuse to document, improving their power management support, handling platform integration stuff for their GPUs and so on. Some of this I've been paid for, but a bunch has been unpaid work in my spare time[1].

Recently, as part of the anti-women #GamerGate campaign[2], a set of awful humans convinced Intel to terminate an advertising campaign because the site hosting the campaign had dared to suggest that the sexism present throughout the gaming industry might be a problem. Despite being awful humans, it is absolutely their right to request that a company choose to spend its money in a different way. And despite it being a dreadful decision, Intel is obviously entitled to spend their money as they wish. But I'm also free to spend my unpaid spare time as I wish, and I no longer wish to spend it doing unpaid work to enable an abhorrently-behaving company to sell more hardware. I won't be working on any Intel-specific bugs. I won't be reverse engineering any Intel-based features[3]. If the backlight on your laptop with an Intel GPU doesn't work, the number of fucks I'll be giving will fail to register on even the most sensitive measuring device.

On the plus side, this is probably going to significantly reduce my gin consumption.

[1] In the spirit of full disclosure: in some cases this has resulted in me being sent laptops in order to figure stuff out, and I was not always asked to return those laptops. My current laptop was purchased by me.

[2] I appreciate that there are some people involved in this campaign who earnestly believe that they are working to improve the state of professional ethics in games media. That is a worthy goal! But you're allying yourself to a cause that disproportionately attacks women while ignoring almost every other conflict of interest in the industry. If this is what you care about, find a new way to do it - and perhaps deal with the rather more obvious cases involving giant corporations, rather than obsessing over indie developers.

For avoidance of doubt, any comments arguing this point will be replaced with the phrase "Fart fart fart".

[3] Except for the purposes of finding entertaining security bugs


Raphaël Hertzog: My Free Software Activities in September 2014

3 October, 2014 - 00:20

This is my monthly summary of my free software related activities. If you’re among the people who made a donation to support my work (26.6 €, thanks everybody!), then you can learn how I spent your money. Otherwise it’s just an interesting status update on my various projects.

Django 1.7

Since Django 1.7 got released in early September, I updated the package in experimental and continued to push for its inclusion in unstable. I sent a few more patches to multiple reverse build dependencies whose maintainers had asked for help (python-django-bootstrap-form, horizon, lava-server) and then uploaded the package to unstable. At that point, I bumped the severity of all bugs filed against packages that were no longer building with Django 1.7.

Later in the month, I made sure that the package migrated to testing; it only required a temporary removal of mumble-django (see #763087). Quite a few packages got updated since then (the remaining bugs are here).

Debian Long Term Support

I have worked towards keeping Debian Squeeze secure, see the dedicated article: My Debian LTS report for September 2014.

Distro Tracker

The pace of development on tracker.debian.org slowed down a bit this month, with only 30 new commits in the repository, closing 6 bugs. Some of the changes are noteworthy though: the news items now contain real links for bugs, CVEs and plain URLs (example here). I have also fixed a serious issue with the way users were identified when they used their Alioth account credentials to log in via sso.debian.org.

On the development side, we’re now able to generate the test suite code coverage which is quite helpful to identify parts of the code that are clearly missing some tests (see bin/gen-coverage.sh in the repository).

Misc packaging

Publican. I have been lagging behind in packaging new upstream versions of Publican, and with the freeze approaching, I decided to take care of it. Unfortunately, it wasn’t as easy as I had hoped and I found numerous issues that I have filed upstream (invalid public identifier, PDF build fails with noNumberLines function available, build of the manual requires the network). Most of those have been fixed upstream in the meantime, but the last issue seems to be a problem in the way we manage our Docbook XML catalogs in Debian. I have thus filed #763598 (docbook-xml: xmllint fails to identify local copy of docbook entities file), which is still awaiting an answer from the maintainer.

Package sponsorship. I have sponsored new uploads of dolibarr (RC bug fix), tcpdf (RC bug fix), tryton-server (security update) and django-ratelimit.

GNOME 3.14. With the arrival of GNOME 3.14 in unstable, I took care of updating gnome-shell-timer and also filed some tickets for extensions that I use: https://github.com/projecthamster/shell-extension/issues/79 and https://github.com/olebowle/gnome-shell-timer/issues/25

git-buildpackage. I filed multiple bugs on git-buildpackage for little issues that have been irking me since I started using this tool: #761160 (gbp pq export/switch should be smarter), #761161 (gbp pq import+export should preserve patch filenames), #761641 (gbp import-orig should be less fragile and more idempotent).

Thanks

See you next month for a new summary of my activities.


Julian Andres Klode: Acer Chromebook 13 (FHD): Initial impressions

3 October, 2014 - 00:10

Today, I received my Acer Chromebook 13, in the glorious FullHD variant with 4GB RAM. For those of you who don’t know it, the Acer Chromebook 13 is a 13.3 inch chromebook powered by a Tegra K1 cpu.

This version cannot be ordered currently; only pre-orders were shipped yesterday (at least here in Germany). I cannot even review it on Amazon (despite having bought it there), as they have not enabled reviews for it yet.

The device feels solidly built and looks good. It comes in all-white matte plastic and is slightly reminiscent of the old white MacBooks. The keyboard is horrible; there’s no well-defined pressure point, and it feels like you’re typing on a pillow. The display is OK, though an IPS panel would be a lot nicer to work with. Oh, and it could be brighter: I do not think that using it outside on a sunny day would be a good idea. The speakers are loud and clear compared to my ThinkPad X230.

The performance of the device is just about acceptable (unfortunately, I do not have any comparison in this device class). Even when typing this blog post in the visual WordPress editor, I notice some sluggishness. Opening the app launcher or loading the new-tab page while music is playing makes the music stop or skip for a few ms (20-50 ms if I had to guess). Running a benchmark in parallel or browsing does not usually cause this stuttering, though.

There are still some bugs in Chrome OS: loading the Play Books library for the first time resulted in some rendering issues. The “Browser” process always consumes at least 10% CPU, even when idling with no page open; this might cause some of the sluggishness I mentioned above. Also, watching Flash videos used more CPU than I expected, given that it is hardware accelerated.

Finally, Netflix did not work out of the box, despite the Chromebook shipping with a special Netflix plugin. I always get some unexpected issue-type page. Setting the user agent to Chrome 38 from Windows, thus forcing the use of the EME video player instead of the Netflix plugin, makes it work.

I reported these software issues to Google via Alt+Shift+I. The issues appeared on the current version of the stable channel, 37.0.2062.120.

What’s next? I don’t know.


Filed under: Uncategorized

Sha Liu: Back again

2 October, 2014 - 20:50

I cannot believe I’ll ever be here again.


Joachim Breitner: 11 ways to write your last Haskell program

2 October, 2014 - 20:00

At my university, we recently held an exam that covered a bit of Haskell, and a simple warm-up question at the beginning asked the students to implement last :: [a] -> a. We did not demand a specific behaviour for last [].

This is a survey of various solutions, only covering those that are actually correct. I elided some variation in syntax (e.g. guards vs. if-then-else).

Most wrote the naive and straightforward code:

last [x] = x
last (x:xs) = last xs

Then quite a few seemed to be uncomfortable with pattern-matching and used conditional expressions. There was some variety in finding out whether a list is empty:

last (x:xs)
  | null xs == True = x
  | otherwise       = last xs

last (x:xs)
  | length (x:xs) == 1 = x
  | otherwise          = last xs

last (x:xs)
  | length xs == 0 = x
  | otherwise      = last xs

last xs
  | length xs > 1 = last (tail xs)
  | otherwise     = head xs

last xs
  | length xs == 1 = head xs
  | otherwise      = last (tail xs)

last (x:xs)
  | xs == []  = x
  | otherwise = last xs

The last one is not really correct, as it has the stricter type Eq a => [a] -> a. Also we did not expect our students to avoid the quadratic runtime caused by using length in every step.

The next class of answers used length to pick out the right element, either using (!!) directly, or simulating it with head and drop:

last xs = xs !! (length xs - 1)

last xs = head (drop (length xs - 1) xs)

There were two submissions that spelled out an explicit left folding recursion:

last (x:xs) = lastHelper x xs
  where
    lastHelper z [] = z
    lastHelper z (y:ys) = lastHelper y ys

And finally there are a few code golfers who just plugged together some other functions:

last x = head (reverse x)

Quite a lot of ways to write last!

Michal Čihař: Merging Weblate instances

2 October, 2014 - 18:00

For quite some time, I've been running a translation server for projects I am involved in at l10n.cihar.com. Historically this used Pootle, but when we had more and more problems with it, I wrote Weblate and started to use it there.

As Weblate became more popular and I got requests to help people with running it, I realized that it might be a good idea to run a server where I could host translations for other projects. This is when Hosted Weblate was born.

After some time, I realized that it really makes little sense to run and maintain separate servers for these sets of projects, so I decided to move all translations from l10n.cihar.com to hosted.weblate.org. Today this move was completed by moving the translations for phpMyAdmin.

Filed under: English phpMyAdmin Weblate


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.