Planet Debian

Planet Debian - http://planet.debian.org/

Mike Hommey: SSH through jump hosts, revisited

8 February, 2016 - 06:26

Close to 7 years ago, I wrote about SSH through jump hosts. Twice. While the method used back then still works, OpenSSH has grown a new option in version 5.4 that allows it to be simplified a bit, by not using nc.

So here is an updated rule, version 2016:

Host *+*
ProxyCommand ssh -W $(echo %h | sed 's/^.*+//') $(echo %h | sed 's/+[^+]*$//;s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/;s/:\([^:+]*\)$/ -p \1/')

The syntax you can use to connect through jump hosts hasn’t changed compared to previous blog posts:

  • With one jump host:
    $ ssh login1%host1:port1+host2:port2 -l login2
  • With two jump hosts:
    $ ssh login1%host1:port1+login2%host2:port2+host3:port3 -l login3
  • With three jump hosts:
    $ ssh login1%host1:port1+login2%host2:port2+login3%host3:port3+host4:port4 -l login4
  • etc.

Logins and ports can be omitted.
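To make this concrete, here is a hypothetical one-jump invocation (host names made up) and what the rule expands it to; the first sed call extracts everything after the last + as the -W target, and the second rewrites the login%host:port prefix into a host plus -l and -p flags:

$ ssh admin%gateway.example.com:2222+internal.example.com:2022
# matches Host *+*, so ssh runs this ProxyCommand behind the scenes:
#   ssh -W internal.example.com:2022 gateway.example.com -p 2222 -l admin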

Iain R. Learmonth: After FOSDEM 2016

8 February, 2016 - 05:55

FOSDEM was fun. It was great to see all these open source projects coming together in one place and it was really good to talk to people that were just as enthusiastic about the FOSS activities they do as I am about mine.

Thanks go to Saúl Corretgé, who looked after the real-time communications dev room and made sure everything ran smoothly. I was very pleased to find that I had to stand for a couple of talks, as the room was full of people eager to learn more about the world of RTC.

I was again pleased on the Sunday when I had such a great audience for my talk in the distributions dev room. Everyone was very welcoming and after the talk I had some corridor discussions with a few people that were really interesting and have given me a few new things to explore in the near future.

A few highlights from FOSDEM:

  • ReactOS: Since I last looked at this project it has really matured and is getting to be rather stable. It may soon be possible to seriously consider replacing Windows XP/Vista machines with ReactOS where the applications in use just cannot run on later versions of Windows.
  • Haiku: I used BeOS a long, long time ago on my video/music PC. I can't say that I was using it over a Linux or BSD distribution for any particular reason, but it worked well. I saw a talk that discussed how Haiku was keeping up-to-date with drivers, and there was another talk, which I didn't see, about the new Haiku package management system. I think I may check out Haiku again in the near future, even if only for the sake of nostalgia.
  • Kolab: Continuing with the theme of things that have matured since I last looked at them, I visited the Kolab stand at FOSDEM and I was impressed with how far it has come. In fact, I was so impressed that I'm looking at using it for my primary email and calendaring in the near future.
  • picoTCP: When I did my Honours project at University, I was playing with Contiki. This looks a lot easier to get started with, even if it's perhaps missing parts of the stack that Contiki implements well. If I ever find time for doing some IoT hacking, this will be on the list of things to try out first.

These are just some of the highlights, and I know I'm missing out a lot here. One of the main things FOSDEM has done for me is open my eyes to how wide and diverse our community is, and it has served as a reminder that there is tons of cool stuff out there if you take a moment to look around.

Also, thanks to my trip to FOSDEM, I now have four new t-shirts to add into the rotation: FOSDEM 2016, Debian, XMPP and twiki.org.

Joey Hess: letsencrypt support in propellor

8 February, 2016 - 05:10

I've integrated letsencrypt into propellor today.

I'm using the reference letsencrypt client. While I've seen complaints that it has a lot of dependencies and is too complicated, it seemed to only need to pull in a few packages, use only a few megabytes of disk space, and it has fewer options than ls does. So it seems fine. (Although it would be nice to have some alternatives packaged in Debian.)

I ended up implementing this:

letsEncrypt :: AgreeTOS -> Domain -> WebRoot -> CertInstaller -> Property NoInfo

The interesting part of that is the CertInstaller, which is passed the certificate files that letsencrypt generates, and is responsible for making the web server (or whatever) use them.

This avoids relying on the letsencrypt client's apache config munging, which is probably useful for many people, but not those of us using configuration management systems. And so avoids most of the complicated magic that the letsencrypt client has a reputation for.

And, this API lets other propellor properties integrate with letsencrypt by providing a CertInstaller of their own. Like this property, which sets up apache to serve a https website, using letsencrypt to get the certificate:

Apache.httpsVirtualHost "example.com" "/var/www"
    (LetsEncrypt.AgreeTOS (Just "me@my.domain"))

That's about as simple a configuration as I can imagine for such a website!

The two parts of letsencrypt that are complicated are not the fault of the client really. Those are renewal and rate limiting.

I'm currently rate limited for the next week because I asked letsencrypt for several certificates for a domain, as I was learning how to use it and integrating it into propellor. So I've not quite managed to fully test everything. That's annoying. I also worry that rate limiting could hit at an inopportune time once I'm relying on letsencrypt. It's especially problematic that it only allows 5 certs for subdomains of a given domain per week. What if I use a lot of subdomains?

Renewal is complicated mostly because there's no good way to test it. You set up your cron job, or whatever, and wait three months, and hopefully it worked. Just as likely, you got something wrong, and your website breaks. Maybe letsencrypt could offer certificates that will only last an hour, or a day, for use when testing renewal.

Also, what if something goes wrong with renewal? Perhaps letsencrypt.org is not available when your certificate needs to be renewed.

What I've done in propellor to handle renewal is, it runs letsencrypt every time, with the --keep-until-expiring option. If this fails, propellor will report a failure. As long as propellor is run periodically by a cron job, this should result in multiple failure reports being sent (for 30 days I think) before a cert expires without getting renewed. But, I have not been able to test this.
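Outside propellor, the same idea can be sketched as a cron script (a minimal sketch, assuming the reference client is installed as letsencrypt, a webroot setup, working local mail, and a made-up domain):

#!/bin/sh
# Ask the client to renew only when the cert is close to expiring;
# on failure, mail root so the report repeats until someone fixes it.
letsencrypt certonly --keep-until-expiring \
    --webroot -w /var/www/example.com -d example.com \
    || echo "renewal failed for example.com" \
        | mail -s "letsencrypt renewal failure" root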

Iustin Pop: mt-st project new homepage

8 February, 2016 - 03:34

A short public notice: the mt-st project has a new homepage at https://github.com/iustin/mt-st. Feel free to forward your distribution-specific patches for upstream integration!

Context: a while back I bought a tape unit to help me with backups. Yay, tape! All good, except that I later found out that the Debian package was orphaned, so I took over the maintenance.

All good once more, but there were a number of patches in the Debian package that were not Debian-specific, but rather valid for upstream. And there was no actual upstream project homepage, as this was quite an old project, with no (visible) recent activity; the canonical place for the project source code was an ftp site (ibiblio.org). I spoke with Kai Mäkisara, the original author, and he agreed to let me take over the maintenance of the project (and that's what I intend to do: maintenance mostly, merging of patches, etc. but not significant work). So now there's a github project for it.

There was no VCS history for the project, so I did my best to partially recreate the history: I took the Debian releases from snapshot.debian.org and used the .orig.tar.gz files as bulk imports; the versions 0.7, 0.8, 0.9b and 1.1 have separate commits in the tree.

I also took the Debian and Fedora patches and applied them, and with a few other cleanups, I've just published the 1.2 release. I'll update the Debian packaging soon as well.

So, if you somehow read this and are the maintainer of mt-st in another distribution, feel free to send patches my way for integration; I know this might be late, as some distributions have dropped it (e.g. Arch Linux).

Ben Armstrong: Bluff Trail icy dawn: Winter 2016

7 February, 2016 - 21:45

Before the rest of the family was up, I took a brief excursion to explore the first kilometre of the Bluff Trail and check out conditions. I turned at the ridge, satisfied I had seen enough to give an idea of what it’s like out there, and then walked back the four kilometres home on the BLT Trail.

I saw three joggers and their three dogs just before I exited the Bluff Trail on the way back, and later, two young men on the BLT with day packs approaching. The parking lot had gained two more cars for a total of three as I headed home. Exercising appropriate caution and judgement, the first loop is beautiful and rewarding, and I’m not alone in feeling the draw of its delights this crisp morning.

[Slideshow captions: At the parking lot, some ice, but passable with caution · Trail head: a few mm of sleet · Many footprints since last snowfall · Thin ice encrusts the bog · The boardwalk offers some loose traction · Mental note: buy crampons · More thin bog ice · Bubbles captured in the bog ice · Shelves hang above receding water · First challenging boulder ascent · Rewarding view at the crest · Time to turn back here · Flowing runnels alongside BLT Trail · Home soon to fix breakfast · If it looks like a tripod, it is · Not a very adjustable tripod, however · Pretty, encrusted pool · The sun peeks out briefly · Light creeps down the rock face · Shimmering icy droplets and feathery moss · Capped with a light dusting of sleet]

Steve Kemp: Redesigning my clustered website

7 February, 2016 - 17:28

I'm slowly planning the redesign of the cluster which powers the Debian Administration website.

Currently the design is simple, and looks like this:

In brief, there is a load-balancer that handles SSL termination and then proxies to one of four Apache servers. These talk back and forth to a MySQL database. Nothing too shocking or unusual.

(In truth there are two database servers, and rather than a single installation of HAProxy, it runs upon each of the webservers; one is the master, which is handled via ucarp. Logically, though, traffic routes through HAProxy to a number of Apache instances. I can lose half of the servers and things still keep running.)

When I set up the site it all ran on one host; it was simpler, but less highly available. It also struggled to cope with the load.

Half the reason for writing/hosting the site in the first place was to document learning experiences, though, so when it came time to make it scale I figured why not learn something and do it neatly? Having it run on cheap and reliable virtual hosts was a good excuse to bump the server count, and the design has been stable for the past few years.

Recently though I've begun planning how it will be deployed in the future and I have a new design:

Rather than having the Apache instances talk to the database I'll indirect through an API-server. The API server will handle requests like these:

  • POST /users/login
    • POST a username/password and return 200 if valid. If bogus details return 403. If the user doesn't exist return 404.
  • GET /users/Steve
    • Return a JSON hash of user-information.
    • Return 404 on invalid user.

I expect to have four API handler endpoints: /articles, /comments, /users & /weblogs. Again we'll use a floating IP and an HAProxy instance to route to multiple API-servers, each of which will use local caching for articles, etc.
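For illustration, here is a hypothetical session against such an API-server (hostname, port and the JSON shape are made up, not the final design):

$ curl -s -o /dev/null -w '%{http_code}\n' \
    --data 'username=steve&password=s3cret' \
    http://api.internal:8080/users/login
200
$ curl -s http://api.internal:8080/users/Steve
{"username":"Steve","joined":"2004-01-01","articles":123}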

This should turn the middle layer, running on Apache, into something simpler, and increase throughput. I suspect, but haven't confirmed, that making a single HTTP request to fetch a (formatted) article body will be cheaper than making N database queries.

Anyway that's what I'm slowly pondering and working on at the moment. I wrote a proof of concept API-server based CMS two years ago, and my recollection of that time is that it was fast to develop, and easy to scale.

Dimitri John Ledkov: Blogging about Let's encrypt over HTTP

7 February, 2016 - 06:30
So the Let's Encrypt thing has started. It can do challenges over HTTP (serving text files) and over DNS (serving TXT records).

My "infrastructure" is fairly modest. I've seen too many of my email accounts get swamped with spam, and/or the companies behind them go bust. So I got my own domain name, surgut.co.uk. However, I don't have the money or time to run my own services, so I've signed up for a Google Apps account for my domain to do email, blogging, etc.

Then later I got the libnih.la domain to host API docs for the library in question. In the world of .io startups, I thought it was an incredibly funny domain name.

But I also have a VPS to host static files on an ad-hoc basis, run a VPN, and run an IRC bouncer. My IRC bouncer is ZNC and I used a self-signed certificate there, thus I had to "ignore" SSL errors in all of my IRC clients... which kind of defeats the purpose somewhat.

I run my VPS on i386 (to save on memory usage) and on Ubuntu 14.04 LTS managed with Landscape. And my little services are just configured by hand there (not using juju).

My first attempt at getting on the Let's Encrypt bandwagon was to use the official client, by fetching debs from xenial and installing them on the LTS. But the package/script there is huge, has support for things I don't need, and wants dependencies I don't have on 14.04 LTS.

However, I found letsencrypt.sh, a minimalist implementation in shell with openssl and curl. It was trivial to get dependencies for and to configure: I specified a domains text file, and that was it. Well, I also added symlinks in my NGINX config to serve the challenges directory, and a hook to deploy the certificate to ZNC and restart it. I've added a cronjob to renew the certs too; a sketch of it follows. Thinking about it, this is not complete, as I'm not sure whether NGINX will pick up the certificate change and/or need to be reloaded. I shall test that once my cert expires.
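A minimal sketch of such a cronjob (the install path is an assumption; letsencrypt.sh's --cron mode only renews certificates that are missing, changed or close to expiry), with an NGINX reload afterwards so the new certificate gets picked up either way:

# m h dom mon dow  command
0 3 * * 1  /opt/letsencrypt.sh/letsencrypt.sh --cron && service nginx reload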

Tweaking the config for NGINX was easy. And I was like, let's see how good it is. I pointed https://www.ssllabs.com/ssltest/ at my https://x4d.surgut.co.uk/ and got a "C" rating: no forward secrecy, vulnerable to downgrade attacks, BEAST, POODLE and stuff like that. I went googling for all types of NGINX configs and eventually found a website with "best known practices", https://cipherli.st/. However, even that only got me to a "B" rating, as it still has Diffie-Hellman things that ssltest caps at "B". So I disabled those too. I've ended up with this gibberish:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers "EECDH+AESGCM:AES256+EECDH";
ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
ssl_session_cache shared:SSL:10m;
#ssl_session_tickets off; # Requires nginx >= 1.5.9
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx >= 1.3.7
#resolver $DNS-IP-1 $DNS-IP-2 valid=300s;
#resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;

I call it gibberish, because IMHO, I shouldn't need to specify any of the above... Anyway I got my A+ rating.
However, security is only as strong as the weakest link. I'm still serving things over HTTP; maybe I should disable that. And I'm yet to check how "good" the TLS is on my ZNC, or whether I need to further harden my sshd configuration.
This has filled a big gap in my infrastructure. However, a few things remain served over HTTP only.
http://blog.surgut.co.uk is hosted by Alphabet's / Google's Blogger service, which I would want to be served over HTTPS.
http://libnih.la is hosted by GitHub's service, which I would want to be served over HTTPS.
I do not want to manage those services, or deal with load, spammers, DDoS attacks, etc. But I am happy to sign CSRs with Let's Encrypt and deploy certs over to those companies, or allow them to self-obtain certificates from Let's Encrypt on my behalf. I use gandi.net as my domain-name provider, which offers an RPC API to manage domains and their zone files, so e.g. I can also generate an API token for those companies to respond to a dns-01 challenge from Let's Encrypt.
One step at a time I guess.
The postings on this site are my own and don't necessarily represent any past/present/future employers' positions, strategies, or opinions.

Andrew Shadura: Community time at Collabora

7 February, 2016 - 00:22

I haven’t yet blogged about this (as normally I don’t blog often), but I joined Collabora in June last year. Since then, I had an opportunity to work with OpenEmbedded again, write a kernel patch, learn lots of things about systemd (in particular, how to stop worrying about it taking over the world and so on), and do lots of other things.

As one would expect when working for a free software consultancy, our customers do understand the value of the community and contributing back to it, and so does the customer for the project I’m working on. In fact, our customer insists we keep the number of locally applied patches to, for example, Linux kernel, to minimum, submitting as much as possible upstream.

However, apart from the upstreaming work which may be done for the customer, Collabora encourages us, the engineers, to spend up to two hours weekly on upstreaming on top of what customers need, and up to five days yearly as paid Community days. These community days may be spent working on code, volunteering at free software events, or even speaking at conferences.

Even though on this project I have already been paid for contributing to a free software project which I previously maintained in my free time (ifupdown), paid community time is a great opportunity to contribute to the projects I'm interested in, and if those coincide with the projects I'm working with, I can effectively spend even more time on them.

A bit unfortunately for me, I didn't get around to planning my community days last year, so I used most of them in the last weeks of the calendar year, on something that benefitted both the free software community and Collabora. I'm talking about SparkleShare, a cross-platform Git-based file synchronisation solution written in C#. SparkleShare provides an easy-to-use interface for Git, or, actually, makes it possible to not use any Git interface at all, as it monitors the working directory using inotify and commits stuff right after it changes. It automatically handles conflicts, even for binary files, though I have to admit its handling could still be improved.

At Collabora, we use SparkleShare to store all sorts of internal documents, and it's used by people not familiar with command-line interfaces too. Unfortunately, the version we had in Debian until recently had a couple of very annoying bugs, making it a great pain to use: it would not notice edits in local files, or not notice new commits being pushed to the server, and that sometimes led to individual users' edits being lost. Not cool, especially when the document has to be sent to the customer in a couple of minutes.

The new versions, 1.4 and the recently released 1.5, were reported to be much better and to fix some crashes, but they also use GTK+ 3 and some libraries not yet packaged for Debian. Thanh Tung Nguyen packaged these (and a newer SparkleShare) for Ubuntu and published them in his PPA, but they required some work to be fit for Debian.

I had never touched Mono packages before in my life, so I had to learn a lot. Some time was spent talking to upstream about fixing their copyright statements (they had none in the code, and only one author was mentioned in configure.ac, and nowhere else in the source); a bit more time went into adjusting and updating the patches for the current source code version. Then, of course, waiting for the packages to go through NEW, fixing parallel build issues, waiting for the buildds to build all the dependencies for at least one architecture… But then, finally, on the 19th of January I had the updated SparkleShare in Debian.

As you may have already guessed, this blog post has been sponsored by Collabora, the first of my employers to encourage me to work on free software in my paid time :)

Neil Williams: lava.debian.net

6 February, 2016 - 21:33

With thanks to Iain Learmonth for the hardware, there is now a Debian instance of LAVA available for use and the Debian wiki page has been updated.

LAVA is a continuous integration system for deploying operating systems onto physical and virtual hardware for running tests. Tests can be simple boot testing, bootloader testing and system level testing. Extra hardware may be required for some system tests. Results are tracked over time and data can be exported for further analysis.

LAVA has a long history of supporting continuous integration of the Linux kernel on ARM devices (ARMv7 & ARMv8). So if you are testing a Linux kernel image on armhf or arm64 devices, you will find a lot of similar tests already running on the other LAVA instances. The Debian LAVA instance seeks to widen that testing in a couple of ways:

  • A wider range of tests, including use of Debian artifacts as well as mainline Linux builds.
  • A wider range of devices by allowing developers to offer devices for testing from their own desks.
  • Letting developers share local test results easily with the community without losing the benefits of having the board on your desk.

This instance relies on the latest changes in lava-server and lava-dispatcher. The 2016.2 release has deprecated the old, complex dispatcher, and a whole new pipeline design is available. The Debian LAVA instance is running 2015.12 at the moment; I'll upgrade to 2016.2 once the packages migrate into testing in a few days and I can do a backport to jessie.

What can LAVA do for Debian?

ARMMP kernel testing

Unreleased builds, experimental initramfs testing – this is the core of what LAVA is already doing behind the scenes of sites like http://kernelci.org/.

U-Boot ARM testing

This is what fully automated LAVA labs have not been able to deliver in the past, at least not without a usable SD Mux.

What’s next

LOTS. This post actually got published early (distracted by the rugby) – I’ll update things more in a later post. Contact me if you want to get involved, I’ll provide more information on how to use the instance and how to contribute to the testing in due course.

Julien Danjou: FOSDEM 2016, recap

6 February, 2016 - 17:30

Last weekend I was in Brussels, Belgium, for FOSDEM, one of the greatest open source developer conferences. I was not sure I would go this year (I already skipped it in 2015), but it turned out I was asked to give a talk in the shared Lua & GNU Guile devroom.

As I am a long-time Lua user and developer, and have been following GNU Guile for several years, the organizers asked me to give a talk that would be a link between the two languages.

I've entitled my talk "How awesome ended up with Lua and not Guile" and gave it to a room full of interested users of the awesome window manager 🙂.

We continued with a panel discussion entitled "The future of small languages: Experience of Lua and Guile", composed of Andy Wingo, Christopher Webber, Ludovic Courtès, Etiene Dalcol, Hisham Muhammad and myself. It was a pretty interesting discussion, where both language communities shared their views on the state of their languages.

It was a bit awkward to talk about Lua & Guile when most of my knowledge was years old, but it turns out many things haven't changed. I hope I was able to provide interesting insights to both communities. All in all, it was a pretty interesting FOSDEM for me, and it had been a long time since I last gave a talk there, so I really enjoyed it. See you next year!

Hideki Yamane: playing to update package (failed)

6 February, 2016 - 10:51

I thought I would build the gnome-todo package from the 3.19 branch.

Once I tried to do that, it turned out to need gtk+3.0 (>= 3.19.5), which Debian doesn't have yet (of course, it's a development branch). Then I tried to build gtk+3, but it needs wayland 1.90, which is not in Debian yet either. So I updated the local package to wayland 1.91, found a tiny bug and sent a patch, and built it (the package diff was sent to the maintainer, and merged). An easy task.

Building again: gtk+3.0 needs "wayland-protocols", which has not been packaged in Debian yet. Okay... (20 minutes' work...) done! Making a wayland-protocols package (not ITPed yet, since it's unclear who should maintain it; under the same umbrella as wayland?) was not difficult.

I built the newest gtk+3.0 source, 3.19.8, in a cowbuilder chroot with those packages (cowbuilder --login --save-after-exec --inputfile foo.deb --inputfile bar.deb)... and it failed in the test suite ;) I don't have enough knowledge to investigate it.

Back to older gtk+3.0 sources: 3.19.1 built fine (diff), but 3.19.2 failed to build, and 3.19.3 to 3.19.8 failed in the test suite.


Time is up, "You lose!"... that's just one of those typical days.

Daniel Pocock: Giving up democracy to get it back

6 February, 2016 - 05:07

Do services like Facebook and Twitter really help worthwhile participation in democracy, or are they the most sinister and efficient mechanism ever invented to control people while giving the illusion that they empower us?

Over the last few years, groups on the left and right of the political spectrum have spoken more and more loudly about the problems in the European Union. Some advocate breaking up the EU, while behind the scenes milking it for every handout they can get. Others seek to reform it from within.

Most recently, former Greek finance minister Yanis Varoufakis has announced plans to found a movement (not a political party) that claims to "democratise" the EU by 2025. Ironically, one of his first steps has been to create a web site directing supporters to Facebook and Twitter. A groundbreaking effort to put citizens back in charge? Or further entangling activism in the false hope of platforms that are run for profit by their Silicon Valley overlords? A Greek tragedy indeed, in the classical sense.

Varoufakis rails against authoritarian establishment figures who don't put the citizens' interests first. Ironically, big data and the cloud are a far bigger threat than Brussels. The privacy and independence of each citizen is fundamental to a healthy democracy. Companies like Facebook are obliged - by law and by contract - to service the needs of their shareholders and advertisers paying to study and influence the poor user. If "Facebook privacy" settings were actually credible, who would want to buy their shares any more?

Facebook is more akin to an activism placebo: people sitting in their armchair clicking to "Like" whales or trees are having hardly any impact at all. Maintaining democracy requires a sufficient number of people to be actively involved, whether it is raising funds for worthwhile causes, scrutinizing the work of our public institutions or even writing blogs like this. Keeping them busy on Facebook and Twitter renders them impotent in the real world.

Big data is one of the areas that requires the greatest scrutiny. Many of the professionals working in the field are actually selling out their own friends and neighbours, their own families and even themselves. The general public and the policy makers who claim to represent us are oblivious or reckless about the consequences of this all-you-can-eat feeding frenzy on humanity.

Pretending to be democratic is all part of the illusion. Facebook's recent announcement to deviate from their real-name policy is about as effective as using sunscreen to treat HIV. By subjecting themselves to the laws of Facebook, activists have simply given Facebook more status and power.

Data means power. Those who are accumulating it from us, collecting billions of tiny details about our behavior, every hour of every day, are fortifying a position of great strength with which they can personalize messages to condition anybody, anywhere, to think the way they want us to. Does that sound like the route to democracy?

I would encourage Mr Varoufakis to get up to speed with Free Software and come down to Zurich next week to hear Richard Stallman explain it the day before launching his DiEM25 project in Berlin.

Will the DiEM25 movement invite participation from experts on big data and digital freedom and make these issues a core element of their promised manifesto? Is there any credible way they can achieve their goal of democracy by 2025 without addressing such issues head-on?

Or put that the other way round: what will be left of democracy in 2025 if big data continues to run rampant? Will it be as distant as the gods of Greek mythology?

Bernd Zeimetz: bzed-letsencrypt puppet module

6 February, 2016 - 02:55

With the announcement of Let's Encrypt dns-01 challenge support we finally had a way to retrieve certificates for those hosts where http challenges won't work. It also allows centralizing the signing procedure, avoiding the installation and maintenance of letsencrypt clients on all hosts.

For an implementation I had the following requirements in mind:

  • Handling of key/csr generation and certificate signing by puppet.
  • Private keys don't leave the host they were generated on. If they need to (for HA setups and similar cases), handling needs to be done outside of the letsencrypt puppet module.
  • Deployment and cleanup of tokens in our DNS infrastructure should be easy to implement and maintain.

After reading through the source code of various letsencrypt client implementations, I decided to use letsencrypt.sh, mainly because its dependencies are available pretty much everywhere and adding the necessary hook is as simple as writing a few lines of code in your favourite (scripting) language. My second favourite was lego, but I wanted to avoid shipping binaries with puppet, so golang was not an option.

It took me some days to find enough spare time to write the necessary puppet code, but finally I managed to release a working module today. It is still not perfect, but the basic tasks are implemented and the whole key/csr/signing chain works pretty well.

And if your hook can handle it, http-01 challenges are possible, too!

Please give the module a try and send patches if you would like to help to improve it!

Jose M. Calhariz: Preview of amanda 3.3.8-1

6 February, 2016 - 02:49

While I sort out a sponsor (my sponsor is very busy), here is a preview of the new packages, so anyone can install and test them on jessie.

The source of the packages is in collab-maint. The deb files for jessie are here:

amanda-common_3.3.8-1_cal0_i386.deb

amanda-server_3.3.8-1_cal0_i386.deb

amanda-client_3.3.8-1_cal0_i386.deb

Here comes the changelog:

amanda (1:3.3.8-1~cal0) unstable; urgency=low

  * New Upstream version
    * Changes for 3.3.8
      * s3 devices
          New NEARLINE S3-STORAGE-CLASS for Google storage.
          New AWS4 STORAGE-API
      * amcryptsimple
          Works with newer gpg2.
      * amgtar
          Default SPARSE value is NO if tar < 1.28.
          Because a bug in tar with some filesystem.
      * amstar
          support include in backup mode.
      * ampgsql
          Add FULL-WAL property.
      * Many bugs fix.
    * Changes for 3.3.7p1
      * Fix build in 3.3.7
    * Changes for 3.3.7
      * amvault
          new --no-interactivity argument.
          new --src-labelstr argument.
      * amdump
          compute crc32 of the streams and write them to the debug files.
      * chg-robot
          Add a BROKEN-DRIVE-LOADED-SLOT property.
      * Many bugs fix.
  * Refreshed patches.
  * Dropped patches that were applied by the upstream: fix-misc-typos,
    automake-add-missing, fix-amcheck-M.patch,
    fix-device-src_rait-device.c, fix-amreport-perl_Amanda_Report_human.pm
  * Change the email of the maintainer.
  * "wrap-and-sort -at" all control files.
  * swig is a new build depend.
  * Bump standard version to 3.9.6, no changes needed.
  * Replace deprecated dependency perl5 by perl, (Closes: #808209), thank
    you Gregor Herrmann for the NMU.

 -- Jose M Calhariz <jose@calhariz.com>  Tue, 02 Feb 2016 19:56:12 +0000

Ben Hutchings: Debian LTS work, January 2016

6 February, 2016 - 02:32

In January I carried over 10 hours from December and was assigned another 15 hours of work by Freexian's Debian LTS initiative. I worked a total of 15 hours. I had a few days on 'front desk' at the start of the month, as my week in that role spanned the new year.

I fixed a regression in the kernel that was introduced to all stable suites in December. I uploaded this along with some minor security fixes, and issued DLA 378-1.

I finished backporting and testing fixes to sudo for CVE-2015-5602. I uploaded an update and issued DLA 382-1, which was followed by DSA 3440-1 for wheezy and jessie.

I finished backporting and testing fixes to Claws Mail for CVE-2015-8614 and CVE-2015-8708. I uploaded an update and issued DLA 383-1. This was followed by DSA 3452-1 for wheezy and jessie, although the issues are less serious there.

I also spent a little time on InspIRCd, though this isn't a package that Freexian's customers care about, and it seems to have been broken in squeeze for several years due to a latent bug in the build system. I had already backported the security fix by the time I discovered this, so I went ahead with an update fixing that regression as well, and issued DLA 384-1.

Finally, I diagnosed the regression in the update to isc-dhcp in DLA 385-1.

Enrico Zini: debtags-cleanup

6 February, 2016 - 01:18
debtags.debian.org cleaned up

Since the Debtags consolidation announcement there is some more news:

No more anonymous submissions
  • I have disabled anonymous tagging. Anyone is still able to tag via Debian Single Sign-On. SSO-enabling the site was as simple as this.
  • Tags no longer need review before being sent to ftp-master. I have removed all the distinction in the code between reviewed and unreviewed tags, and all the code for the tag review interface.
  • The site now has an audit log for each user, that any person logged in via SSO can access via the "history" link in the top right of the tag editor page.
Official recognition as Debian Contributors
  • Tag contributions are sent to contributors.debian.org. There is no historical data for them because all submissions until now have been anonymous, but from now on if you tag packages you are finally recognised as a Debian Contributor!
Mailing lists closed
  • I closed the debtags-devel and debtags-commits mailing lists; the archives are still online.
  • I have updated the workflow for suggesting new tags in the FAQ to "submit a bug to debtags and Cc debian-devel"

We can just use debian-devel instead of debtags-devel.

Autotagging of trivial packages
  • I have introduced the concept of "trivial" packages, currently defined as any package in the libs, oldlibs and debug sections. They are tagged automatically by the site maintenance and are excluded from the site's todo lists and tag editor. We do not need to bother with trivial packages anymore, all 13239 of them.
Miscellaneous other changes
  • I have moved the debtags vocabulary from subversion to git
  • I have renamed the tag used to mark packages not yet reviewed by humans from special::not-yet-tagged to special::unreviewed
  • At the end of every nightly maintenance, some statistics are saved into a database table. I have collected 10 years of historical data by crunching big tarballs of site backups, and fed them to the historical stats table.
  • The workflow for getting tags from the site to ftp-master is now far, far simpler. It is almost simple enough that I should manage to explain it without needing to dig through code to see what it is actually doing.

Michal Čihař: Bug squashing in Gammu

5 February, 2016 - 18:00

I've not really spent much time on Gammu in the past months, and it was about time to do some basic housekeeping.

It's not that there is too much new development; rather, I wanted to go through the issue tracker, properly tag issues, close questions without response, and resolve the ones that are simple to fix. This led to a few code and documentation improvements.

Overall, the list of closed issues is quite huge.

Do you want more development to happen on Gammu? You can support it with money.


Vincent Fourmond: Making oprofile work again with recent kernels

5 February, 2016 - 03:54
I've been using oprofile to profile programs for a while now (especially QSoas), because it doesn't require specific compilation options and doesn't make your program run much more slowly (unlike valgrind, which can also be used to some extent for profiling). It's a pity the Debian package was dropped long ago, but the Ubuntu packages work out of the box on Debian. But today, while trying to see what takes so long in some fits I'm running, here's what I got:
~ operf QSoas
Unexpected error running operf: Permission denied

Looking further using strace, I could see that what was not working was the first call to perf_event_open.
It took me quite a long time to understand why it stopped working and how to get it working again, so here it is for those of you who googled the error and couldn't find any answer (including me, who will probably have forgotten the answer in a couple of months). The reason behind the change is that, for security reasons, non-privileged users no longer have the necessary privileges since Debian kernel 4.1.3-1; here's the relevant bit from the changelog:

  * security: Apply and enable GRKERNSEC_PERF_HARDEN feature from Grsecurity,
    disabling use of perf_event_open() by unprivileged users by default
    (sysctl: kernel.perf_event_paranoid)

The solution is simple, just run as root:
~ sysctl kernel.perf_event_paranoid=1

(the default value seems to be 3, for now). Hope it helps!
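To make the setting survive a reboot, it can also be dropped into sysctl.d (a sketch; the file name is arbitrary):

~ echo 'kernel.perf_event_paranoid = 1' > /etc/sysctl.d/99-perf.conf
~ sysctl -p /etc/sysctl.d/99-perf.conf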

Petter Reinholdtsen: Using appstream in Debian to locate packages with firmware and mime type support

4 February, 2016 - 22:40

The appstream system is taking shape in Debian, and one provided feature is a very convenient way to tell you which package to install to make a given firmware file available when the kernel is looking for it. This can be done using apt-file too, but that is for someone else to blog about. :)

Here is a small recipe to find the package with a given firmware file, in this example I am looking for ctfw-3.2.3.0.bin, randomly picked from the set of firmware announced using appstream in Debian unstable. In general you would be looking for the firmware requested by the kernel during kernel module loading. To find the package providing the example file, do like this:

% apt install appstream
[...]
% apt update
[...]
% appstreamcli what-provides firmware:runtime ctfw-3.2.3.0.bin | \
  awk '/Package:/ {print $2}'
firmware-qlogic
%

See the appstream wiki page to learn how to embed the package metadata in a way appstream can use.

This same approach can be used to find any package supporting a given MIME type. This is very useful when you get a file you do not know how to handle. First find the MIME type using file --mime-type, and next look up the package providing support for it. Let's say you got an SVG file. Its MIME type is image/svg+xml, and you can find all packages handling this type like this:

% apt install appstream
[...]
% apt update
[...]
% appstreamcli what-provides mimetype image/svg+xml | \
  awk '/Package:/ {print $2}'
bkchem
phototonic
inkscape
shutter
tetzle
geeqie
xia
pinta
gthumb
karbon
comix
mirage
viewnior
postr
ristretto
kolourpaint4
eog
eom
gimagereader
midori
%

I believe the MIME types are fetched from the desktop file for packages providing appstream metadata.
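The two steps combine nicely into a single pipeline; a sketch, with somefile.svg standing in for whatever file you received:

% appstreamcli what-provides mimetype \
  "$(file --brief --mime-type somefile.svg)" | \
  awk '/Package:/ {print $2}'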

Ritesh Raj Sarraf: Lenovo Yoga 2 13 running Debian with GNOME Converged Interface

4 February, 2016 - 22:33

I've wanted to blog about this for a while. So, though I'm terrible at creating video reviews, I'm going to do it anyway rather than keep procrastinating.

In this video, the emphasis is on using Free Software tools (GNOME in particular), with which you should soon be able to serve the needs of a desktop/laptop as well as a tablet.

The video also touches a bit on Touchpad Gestures.