Planet Debian

Planet Debian - http://planet.debian.org/

Enrico Zini: Jessie live on UEFI systems

30 May, 2017 - 01:12

According to the Debian Wiki, you can't boot a Debian Live based on Jessie on a UEFI system:

UEFI support in live images: At this point, UEFI support exists only in Debian's installation images. The accompanying live images do not have support for UEFI boot, as the live-build software used to generate them still does not include it. Hopefully the debian-live developers will add this important feature soon.

Some people really needed it, though, so I kept looking.

Here's a script that takes a Jessie Debian Live .iso file and the device name for a USB pendrive, and gives you a pendrive that boots on UEFI:

#!/bin/sh
# License: do what you want but it's not my fault, I told you not to.

set -ue

ISO=${1:?"Usage: $0 file.iso usbdev"}
DEV=${2:?"Usage: $0 file.iso usbdev"}

parted -s $DEV mklabel gpt mkpart primary fat32 1 100%
mkfs.vfat ${DEV}1
mount ${DEV}1 /mnt

bsdtar -C /mnt -xf $ISO

mkdir -p /mnt/efi/boot
# Shell.efi comes from https://svn.code.sf.net/p/edk2/code/trunk/edk2/ShellBinPkg/UefiShell/X64/
cp Shell.efi /mnt/efi/boot/Bootx64.efi
echo 'live\vmlinuz initrd=live\initrd.img append boot=live components' > /mnt/startup.nsh

umount /mnt
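
For example, assuming the script is saved as jessie-live-uefi.sh and the pendrive shows up as /dev/sdb (both names hypothetical; any Jessie live image will do), an invocation would look like:

# WARNING: this repartitions the target device; triple-check the device name
sudo sh jessie-live-uefi.sh debian-live-8.8.0-amd64-standard.iso /dev/sdb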

Only use it if you really need it, though: Stretch will support this out of the box, and it's coming soon.

Clint Adams: Dots and dashes

29 May, 2017 - 23:57

The Canadians were on LSD. The sex-toy bag was full of booze and Olivia lost her buttplug and we had to search the kitchen drawers before she juiced the grapefruits.

Posted on 2017-05-29 Tags: mintings

Russ Allbery: Debian Policy 4.0.0.0

29 May, 2017 - 05:16

Today, about a month later than I had intended due to having three consecutive work weeks that mostly drained me of energy, I finally uploaded Debian Policy 4.0.0.0 to Debian experimental.

This went to experimental rather than unstable for two reasons:

  • Policy 4.0.0.0 is not targeted for Debian stretch. We're getting ready for the start of the buster development cycle. Please don't prioritize updating your packages until stretch is released. Please don't include Standards-Version changes in your uploads targeted for stretch.

  • This release finally converts Policy from DebianDoc-SGML to DocBook. Many thanks to Osamu Aoki and Guillem Jover for their hard work on this conversion. This is a significant change with downstream impact on all the rendered versions. There will be bugs and formatting problems, and there will be unexpected impact on downstream consumers of Policy. It will take a bit to sort those out.

I expect there to be a few more point-release changes to packaging and formatting uploaded to experimental before uploading to unstable for the start of the buster development cycle. (I've indeed already noticed about six minor bugs, including the missing release date in the upgrading checklist....)

Due to the DocBook conversion, and the resources rightly devoted to the stretch release instead, it may be a bit before the new Policy version shows up properly in all the places it's published.

As you might expect from it having been more than a year since the previous release, there were a lot of accumulated changes. I posted the full upgrading-checklist entries to debian-devel-announce, or of course you can install the debian-policy package from experimental and review them in /usr/share/doc/debian-policy/upgrading-checklist.txt.gz.

Antonio Terceiro: Debian CI: new data retention policy

29 May, 2017 - 04:20

When I started debci for Debian CI, I went for the simplest thing that could possibly work. One of the design decisions was to use the filesystem directly for file storage. A large part of the Debian CI data consists of log files and test artifacts (which are just files), and using the filesystem directly makes them a lot easier to handle. The rest of the data, which is structured (test history and status of packages), is stored as JSON files.

Another nice benefit of using the filesystem like this is that I get a sort of REST API for free, by just exposing the file storage to the web. For example, getting the latest test status of debci itself on unstable/amd64 is as easy as:

$ curl https://ci.debian.net/data/packages/unstable/amd64/d/debci/latest.json
{
  "run_id": "20170528_173652",
  "package": "debci",
  "version": "1.5.1",
  "date": "2017-05-28 17:43:05",
  "status": "pass",
  "blame": [],
  "previous_status": "pass",
  "duration_seconds": "373",
  "duration_human": "0h 6m 13s",
  "message": "Tests passed, but at least one test skipped",
  "last_pass_version": "1.5.1",
  "last_pass_date": "2017-05-28 17:43:05"
}
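
Since it's plain JSON over HTTP, it composes nicely with standard tools. For instance, pulling out just the status field with jq (available as the jq package in Debian):

$ curl -s https://ci.debian.net/data/packages/unstable/amd64/d/debci/latest.json | jq -r .status
pass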

Now, nothing in life is without compromises. One big disadvantage of the way debci stored its data is that there were a lot of files, which ends up using a large number of inodes in the filesystem. The current Debian CI master has more than 10 million inodes in its filesystem, and almost all of them were in use. This is clearly unsustainable.

You will notice that I said stored, because as of version 1.6, debci now implements a data retention policy: log files and test artifacts are only kept for a configurable number of days (default: 180).
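
For the curious, here is a minimal sketch of what such a retention sweep could look like, assuming artifacts are plain files under a data directory; the path and variable name below are hypothetical, and debci's actual implementation may differ:

#!/bin/sh
# hypothetical retention sweep, not debci's actual code
set -eu
days="${RETENTION_DAYS:-180}"      # hypothetical configuration knob
data=/var/lib/debci/data           # hypothetical data directory
# delete logs and artifacts older than $days days, then prune empty directories
find "$data" -type f -mtime "+$days" -delete
find "$data" -type d -empty -delete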

So there you have it: effective immediately, Debian CI will not provide logs and test artifacts older than 180 days.

If you are reporting bugs based on logs from Debian CI, please don’t hotlink the log files. Instead, make sure you download the logs in question and attach them to the bug report, because in 6 months they will be gone.

Russ Allbery: On time management

28 May, 2017 - 06:48

Last December, the Guardian published a long essay by Oliver Burkeman entitled "Why time management is ruining our lives". Those who follow my book reviews know I read a lot of time management books, so of course I couldn't resist this. And, perhaps surprisingly, I'm not going to disagree with it. It's an excellent essay, and well worth your time.

Burkeman starts by talking about Inbox Zero:

If all this fervour seems extreme – Inbox Zero was just a set of technical instructions for handling email, after all – this was because email had become far more than a technical problem. It functioned as a kind of infinite to-do list, to which anyone on the planet could add anything at will.

This is, as Burkeman develops in the essay, an important critique of time management techniques in general, not just Inbox Zero: perhaps you can become moderately more efficient, but what are you becoming more efficient at doing, and why does it matter? If there were a finite amount of things that you had to accomplish, with leisure the reward at the end of the fixed task list, doing those things more efficiently makes perfect sense. But this is not the case in most modern life. Instead, we live in a world governed by Parkinson's Law: "Work expands to fill the time available for its completion."

Worse, we live in a world where the typical employer takes Parkinson's Law, not as a statement on the nature of ever-expanding to-do lists, but a challenge to compress the time made available for a task to try to force the work to happen faster. Burkeman goes farther into the politics, pointing out that a cui bono analysis of time management suggests that we're all being played by capitalist employers. I wholeheartedly agree, but that's worth a separate discussion; for those who want to explore that angle, David Graeber's Debt and John Kenneth Galbraith's The Affluent Society are worth your time.

What I want to write about here is why I still read (and recommend) time management literature, and how my thinking on it has changed.

I started in the same place that most people probably do: I had a bunch of work to juggle, I felt I was making insufficient forward progress on it, and I felt my day contained a lot of slack that could be put to better use. The alluring promise of time management is that these problems can be resolved with more organization and some focus techniques. And there is a huge surge of energy that comes with adopting a new system and watching it work, since the good ones build psychological payoff into the tracking mechanism. Starting a new time management system is fun! Finishing things is fun!

I then ran into the same problem that I think most people do: after that initial surge of enthusiasm, I had lists, systems, techniques, data on where my time was going, and a far more organized intake process. But I didn't feel more comfortable with how I was spending my time, I didn't have more leisure time, and I didn't feel happier. Often the opposite: time management systems will often force you to notice all the things you want to do and how slow your progress is towards accomplishing any of them.

This is my fundamental disagreement with Getting Things Done (GTD): David Allen firmly believes that the act of recording everything that is nagging at you to be done relieves the brain of draining background processing loops and frees you to be more productive. He argues for this quite persuasively; as you can see from my review, I liked his book a great deal, and used his system for some time. But, at least for me, this does not work. Instead, having a complete list of goals towards which I am making slow or no progress is profoundly discouraging and depressing. The process of maintaining and dwelling on that list while watching it constantly grow was awful, quite a bit worse psychologically than having no time management system at all.

Mark Forster is the time management author who speaks the best to me, and one of the points he makes is that time management is the wrong framing. You're not going to somehow generate more time, and you're usually not managing minutes and seconds. A better framing is task management, or commitment management: the goal of the system is to manage what you mentally commit to accomplishing, usually by restricting that list to something far shorter than you would come up with otherwise. How, in other words, to limit your focus to a small enough set of goals that you can make meaningful progress instead of thrashing.

That, for me, is now the merit and appeal of time (or task) management systems: how do I sort through all the incoming noise, distractions, requests, desires, and compelling ideas that life throws at me and figure out which of them are worth investing time in? I also benefit from structuring that process for my peculiar psychology, in which backlogs I have to look at regularly are actively dangerous for my mental well-being. Left unchecked, I can turn even the most enjoyable hobby into an obligation and then into a source of guilt for not meeting the (entirely artificial) terms of the obligation I created, without even intending to.

And here I think it has a purpose, but it's not the purpose that the time management industry is selling. If you think of time management as a way to get more things done and get more out of each moment, you're going to be disappointed (and you're probably also being taken advantage of by the people who benefit from unsustainable effort without real, unstructured leisure time). I practice Inbox Zero, but the point wasn't to be more efficient at processing my email. The point was to avoid the (for me) psychologically damaging backlog of messages while acting on the knowledge that 99% of email should go immediately into the trash with no further action. Email is an endless incoming stream of potential obligations or requests for my time (even just to read a longer message) that I should normally reject. I also take the time to notice patterns of email that I never care about and then shut off the source or write filters to delete that email for me. I can then reserve my email time for moments of human connection, directly relevant information, or very interesting projects, and spend the time on those messages without guilt (or at least much less guilt) about ignoring everything else.

Prioritization is extremely difficult, particularly once you realize that true prioritization is not about first and later, but about soon or never. The point of prioritization is not to choose what to do first; it's to choose the 5% of things that you are going to do at all, convince yourself to be mentally okay with never doing the other 95% (and not lying to yourself about how there will be some future point when you'll magically have more time), and vigorously defend your focus and effort for that 5%. And, hopefully, wholeheartedly enjoy working on those things, without guilt or nagging that there's something else you should be doing instead.

I still fail at this all the time. But I'm better than I used to be.

For me, that mental shift was by far the hardest part. But once you've made that shift, I do think the time management world has a lot of tools and techniques to help you make more informed choices about the 5%, and to help you overcome procrastination and loss of focus on your real goals.

Those real goals should include true unstructured leisure and "because I want to" projects. And hopefully, if you're in a financial position to do it, include working less on what other people want you to do and more on the things that delight you. Or at least making a well-informed strategic choice (for the sake of money or some other concrete and constantly re-evaluated reason) to sacrifice your personal goals for some temporary external ones.

Russ Allbery: Optimistic haul

28 May, 2017 - 01:09

I never have as much time to read as I wish I did, but I keep buying books, of course. Maybe someday I'll have a good opportunity to take extended time off work and just read for a bit. Well, retirement, at least, right?

Charlie Jane Anders — All the Birds in the Sky (sff)
Peter C. Brown, et al. — Make It Stick (nonfiction)
April Daniels — Dreadnought: Nemesis (sff)
T. Kingfisher — The Halcyon Fairy Book (sff collection)
T. Kingfisher — Jackalope Wives and Other Stories (sff collection)
Margot Lee Shetterly — Hidden Figures (nonfiction)
Cordwainer Smith — Norstrilia (sff)
Kristine Smith — Code of Conduct (sff)
Jonathan Taplin — Move Fast and Break Things (nonfiction)
Sarah Zettel — Fool's War (sff)
Sarah Zettel — Playing God (sff)
Sarah Zettel — The Quiet Invasion (sff)

It doesn't help that James Nicoll keeps creating new lists of books that all sound great. And there's some really interesting nonfiction being written right now.

Make It Stick is the current book for the work book club.

Lars Wirzenius: Distix movement

27 May, 2017 - 15:28

Distix is my distributed ticketing system. I initially wrote the core of it as a bit of programming performance art, to celebrate my 30 years as a programmer. Distix is built on top of git and emails in Maildirs. It is a silent listener to your issue and bug discussions: as long as you ensure it gets a copy of each mail, it takes care of automatically arranging things into separate tickets based on email threading. Users and customers do not need to even know Distix is being used. Only the "support staff" need ever interact with Distix, and they mostly only need to close tickets that have been dealt with.

I've been using Distix for my own stuff for some time now, and recently we've started using it at work. I slowly improve it as we find problems.

It's not a sleek, smooth, finished tool. It's clunky, weird, and probably not what you want. But it's what I want.

Changes in recent months:

  • There is a new website: http://distix.eu/. No particular good reason for a new website, but I won the domain for free a couple of years ago, so I might as well use it.

  • In addition, a ticketing system for Distix itself: http://tickets.distix.eu/. Possibly I should've called the subdomain dogfood, but I'm a serious person, not prone to trying to be funny.

  • Mails can now be imported using IMAP.

  • Importing has been optimized for speed and memory use, making my own production use more practical.

I've discussed with a friend the possibility of writing a web UI, and some day maybe that will happen. For now, distix is a command line application that can generate a static HTML site.

Steinar H. Gunderson: Last minute stretch bugs

26 May, 2017 - 19:00

Over the last week, I found no fewer than three pet bugs that I hope will be allowed to go in before the stretch release:

  • #863286: lua-http: completely broken in non-US locales
  • #843448: linux-image-4.8.0-1-armmp-lpae: fails to boot on Odroid-Xu4 with rootfs on USB (actually my problem is that the NIC doesn't work, but same root cause—this makes stretch basically unusable on XU4)
  • #863280: cubemap: streams with paths exactly 7 characters long get broken buffering behavior

I promise, none of these were found late because I upgraded to stretch too late—just a perfect storm. :-)

Michal Čihař: Running Bitcoin node on Turris Omnia

26 May, 2017 - 17:00

For quite some time I've been a happy user of the Turris Omnia router. The router has quite good hardware, so I decided to see whether I could run a Bitcoin node and an ElectrumX server on it.

To make things easier to manage, I've decided to use LXC and run all of this in a separate container. First of all you need LXC on the router. It is part of the default setup, but in case you've removed it, you can add it back in the Updater settings.

Now we will create the Debian container. The Turris Documentation has basic information on how to create one; in the rest of this post I assume it is called debian.

It's also a good idea to enable LXC autostart; to do so, add your container to /etc/config/lxc-auto:

config container
    option name debian

You might also want to edit the LXC container configuration to enable clean shutdown:

# Send SIGRTMIN+3 to shutdown systemd
lxc.haltsignal = 37

To make the system more recent, I've decided to use Debian Stretch (one of the reasons was that ElectrumX needs Python 3.5.3 or newer). That's probably a sane choice right now anyway, given that Stretch is already frozen and will soon be stable. As Stretch is not available as a download option in Omnia, I've chosen to install Debian Jessie and upgrade it later:

$ lxc-attach  --name debian
$ sed -i s/jessie/stretch/ /etc/apt/sources.list
$ apt update
$ apt full-upgrade

Now you have an up-to-date system and we can start installing dependencies. The first thing to install is Bitcoin Core; just follow the instructions on their website. Then it's time to set it up and wait for the full blockchain to download:

$ adduser bitcoin
$ su - bitcoin
$ bitcoind -daemon

Depending on your connection speed, the download will take a few hours. You can monitor the progress using bitcoin-cli; you're waiting for about 450k blocks:

$ bitcoin-cli getinfo
{
  "version": 140000,
  "protocolversion": 70015,
  "walletversion": 130000,
  "balance": 0.00000000,
  "blocks": 301242,
  "timeoffset": -1,
  "connections": 8,
  "proxy": "",
  "difficulty": 8853416309.1278,
  "testnet": false,
  "keypoololdest": 1490267950,
  "keypoolsize": 100,
  "paytxfee": 0.00000000,
  "relayfee": 0.00001000,
  "errors": ""
}
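
If you only care about how far along the download is, the same RPC interface exposes getblockcount, which is easy to poll (run this as the bitcoin user):

$ watch -n 60 bitcoin-cli getblockcount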

Depending on how much memory you have (mine has 2 GB) and what else you run on the router, you may have to tweak the bitcoind configuration to consume less memory. This can be done by editing .bitcoin/bitcoin.conf; I've ended up with the following settings:

par=1
dbcache=150
maxmempool=150

You can also create a startup unit for the Bitcoin daemon (place it at /etc/systemd/system/bitcoind.service):

[Unit]
Description=Bitcoind
After=network.target

[Service]
ExecStart=/opt/bitcoin/bin/bitcoind
User=bitcoin
TimeoutStopSec=30min
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target

Now we can enable the service to start with the container:

systemctl enable bitcoind.service

Then I wanted to set up ElectrumX as well, but I quickly realized that it uses way more memory than my router has, so there is no way to run it without swap, which would probably make it quite slow (I haven't tried that).

Filed under: Debian English OpenWrt

Michael Prokop: The #newinstretch game: dbgsym packages in Debian/stretch

26 May, 2017 - 16:37

Debug packages include debug symbols and were so far usually named <package>-dbg in Debian. Those packages are essential if you have to debug failing (especially: crashing) programs. Since December 2015 Debian has automatic dbgsym packages, built by default. Those packages are available as <package>-dbgsym, so starting with Debian/stretch you should no longer look for -dbg packages but for -dbgsym instead. Currently there are 13,369 dbgsym packages available for the amd64 architecture of Debian/stretch; compared to the 2,250 packages I counted being available for Debian/jessie, this is a huge improvement. (If you're interested in the details of dbgsym packages as a package maintainer, take a look at the Automatic Debug Packages page in the Debian wiki.)

The dbgsym packages are NOT provided by the usual Debian archive though (which is a good thing, since those packages consume quite a lot of disk space; e.g. just the amd64 stretch mirror of debian-debug consumes 47GB). Instead there's a new archive called debian-debug. To get access to the dbgsym packages via the debian-debug suite on your Debian/stretch system, include the following entry in your apt sources.list configuration (replace deb.debian.org with whatever mirror you prefer):

deb http://deb.debian.org/debian-debug/ stretch-debug main
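
After an apt update, the dbgsym packages then install like any other package; for the demo below that would be:

% apt update
% apt install coreutils-dbgsym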

If you’re not yet familiar with usage of such debug packages let me give you a short demo.

Let’s start with sending SIGILL (Illegal Instruction) to a running sha256sum process, causing it to generate a so called core dump file:

% sha256sum /dev/urandom &
[1] 1126
% kill -4 1126
% 
[1]+  Illegal instruction     (core dumped) sha256sum /dev/urandom
% ls
core
% file core
core: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from 'sha256sum /dev/urandom', real uid: 1000, effective uid: 1000, real gid: 1000, effective gid: 1000, execfn: '/usr/bin/sha256sum', platform: 'x86_64'
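
A side note: if no core file appears at this point, core dumps are most likely disabled via the shell's resource limits (or intercepted by a handler such as systemd-coredump); raising the limit for the current shell is a one-liner:

% ulimit -c unlimited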

Now we can run the GNU Debugger (gdb) on this core file, executing:

% gdb sha256sum core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...(no debugging symbols found)...done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in ?? ()
(gdb) bt
#0  0x000055fe9aab63db in ?? ()
#1  0x000055fe9aab8606 in ?? ()
#2  0x000055fe9aab4e5b in ?? ()
#3  0x000055fe9aab42ea in ?? ()
#4  0x00007faec30872b1 in __libc_start_main (main=0x55fe9aab3ae0, argc=2, argv=0x7ffc512951f8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffc512951e8) at ../csu/libc-start.c:291
#5  0x000055fe9aab4b5a in ?? ()
(gdb) 

As you can see by the several “??” question marks, the “bt” command (short for backtrace) doesn’t provide useful information.
So let’s install the according debug package, which is coreutils-dbgsym in this case (since the sha256sum binary which generated the core file is part of the coreutils package). Then let’s rerun the same gdb steps:

% gdb sha256sum core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...Reading symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug...done.
done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
526     lib/sha256.c: No such file or directory.
(gdb) bt
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
#1  0x000055fe9aab8606 in sha256_stream (stream=0x55fe9be95060, resblock=0x7ffc51295080) at lib/sha256.c:230
#2  0x000055fe9aab4e5b in digest_file (filename=0x7ffc51295f3a "/dev/urandom", bin_result=0x7ffc51295080 "\001", missing=0x7ffc51295078, binary=<optimized out>) at src/md5sum.c:624
#3  0x000055fe9aab42ea in main (argc=<optimized out>, argv=<optimized out>) at src/md5sum.c:1036

As you can see it’s reading the debug symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug and this is what we were looking for.
gdb now also tells us that we don't have lib/sha256.c available. For even better debugging it's useful to have the corresponding source code around, which is just an `apt-get source coreutils ; cd coreutils-8.26/` away:

~/coreutils-8.26 % gdb sha256sum ~/core
[...]
Type "apropos word" to search for commands related to "word"...
Reading symbols from sha256sum...Reading symbols from /usr/lib/debug/.build-id/a4/b946ef7c161f2d215518ca38d3f0300bcbdbb7.debug...done.
done.
[New LWP 1126]
Core was generated by `sha256sum /dev/urandom'.
Program terminated with signal SIGILL, Illegal instruction.
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
526           R( h, a, b, c, d, e, f, g, K(25), M(25) );
(gdb) bt
#0  0x000055fe9aab63db in sha256_process_block (buffer=buffer@entry=0x55fe9be95290, len=len@entry=32768, ctx=ctx@entry=0x7ffc51294eb0) at lib/sha256.c:526
#1  0x000055fe9aab8606 in sha256_stream (stream=0x55fe9be95060, resblock=0x7ffc51295080) at lib/sha256.c:230
#2  0x000055fe9aab4e5b in digest_file (filename=0x7ffc51295f3a "/dev/urandom", bin_result=0x7ffc51295080 "\001", missing=0x7ffc51295078, binary=<optimized out>) at src/md5sum.c:624
#3  0x000055fe9aab42ea in main (argc=<optimized out>, argv=<optimized out>) at src/md5sum.c:1036
(gdb) 

Now we’re ready for all the debugging magic. :)

Thanks to everyone who was involved in getting us the automatic dbgsym package builds in Debian!

Michael Prokop: The #newinstretch game: new forensic packages in Debian/stretch

25 May, 2017 - 14:48

Repeating what I did for the last Debian releases with the #newinwheezy and #newinjessie games it’s time for the #newinstretch game:

Debian/stretch AKA Debian 9.0 will include a bunch of packages for people interested in digital forensics. These are the packages maintained within the Debian Forensics team which are new in the Debian/stretch release as compared to Debian/jessie (ignoring jessie-backports):

  • bruteforce-salted-openssl: try to find the passphrase for files encrypted with OpenSSL
  • cewl: custom word list generator
  • dfdatetime/python-dfdatetime: Digital Forensics date and time library
  • dfvfs/python-dfvfs: Digital Forensics Virtual File System
  • dfwinreg: Digital Forensics Windows Registry library
  • dislocker: read/write encrypted BitLocker volumes
  • forensics-all: Debian Forensics Environment – essential components (metapackage)
  • forensics-colorize: show differences between files using color graphics
  • forensics-extra: Forensics Environment – extra console components (metapackage)
  • hashdeep: recursively compute hashsums or piecewise hashings
  • hashrat: hashing tool supporting several hashes and recursivity
  • libesedb(-utils): Extensible Storage Engine DB access library
  • libevt(-utils): Windows Event Log (EVT) format access library
  • libevtx(-utils): Windows XML Event Log format access library
  • libfsntfs(-utils): NTFS access library
  • libfvde(-utils): FileVault Drive Encryption access library
  • libfwnt: Windows NT data type library
  • libfwsi: Windows Shell Item format access library
  • liblnk(-utils): Windows Shortcut File format access library
  • libmsiecf(-utils): Microsoft Internet Explorer Cache File access library
  • libolecf(-utils): OLE2 Compound File format access library
  • libqcow(-utils): QEMU Copy-On-Write image format access library
  • libregf(-utils): Windows NT Registry File (REGF) format access library
  • libscca(-utils): Windows Prefetch File access library
  • libsigscan(-utils): binary signature scanning library
  • libsmdev(-utils): storage media device access library
  • libsmraw(-utils): split RAW image format access library
  • libvhdi(-utils): Virtual Hard Disk image format access library
  • libvmdk(-utils): VMWare Virtual Disk format access library
  • libvshadow(-utils): Volume Shadow Snapshot format access library
  • libvslvm(-utils): Linux LVM volume system format access library
  • plaso: super timeline all the things
  • pompem: Exploit and Vulnerability Finder
  • pytsk/python-tsk: Python Bindings for The Sleuth Kit
  • rekall(-core): memory analysis and incident response framework
  • unhide.rb: Forensic tool to find processes hidden by rootkits (was already present in wheezy but missing in jessie, available via jessie-backports though)
  • winregfs: Windows registry FUSE filesystem

Join the #newinstretch game and present packages and features which are new in Debian/stretch.

Jaldhar Vyas: For Downtown Hoboken

25 May, 2017 - 11:34

Q: What should you do if you see a spaceman?

A: Park there before someone takes it, man.

Steve Kemp: Getting ready for Stretch

25 May, 2017 - 04:00

I run about 17 servers. Of those about six are very personal and the rest are a small cluster which are used for a single website. (Partly because the code is old and in some ways a bit badly designed, partly because "clustering!", "high availability!", "learning!", "fun!" - seriously I had a lot of fun putting together a fault-tolerant deployment with haproxy, ucarp, etc, etc. If I were paying for it the site would be both retired and static!)

I've started the process of upgrading to stretch by picking a bunch of hosts that do things I could live without for a few days - in case there were big problems, or I needed to restore from backups.

So far I've upgraded:

  • master.steve
    • This is a puppet-master, so while it is important killing it wouldn't be too bad - after all my nodes are currently setup properly, right?
    • Upgrading this host changed the puppet-server from 3.x to 4.x.
    • That meant I had to upgrade all my client-systems, because puppet 3.x won't talk to a 4.x master.
    • Happily jessie-backports contains a recent puppet-client.
    • It also meant I had to rework a lot of my recipes, in small ways.
  • builder.steve
    • This is a host I use to build packages upon, via pbuilder.
    • I have chroots setup for wheezy, jessie, and stretch, each in i386 and amd64 flavours.
  • git.steve
    • This is a host which stores my git-repositories, via gitbucket.
    • While it is an important host in terms of functionality, the software it needs is very basic: nginx proxies to a java application which runs on localhost:XXXX, with some caching magic happening to deal with abusive clients.
    • I do keep considering using gitlab, because I like its runners, etc. But that is pretty resource intensive.
    • On the other hand If I did switch I could drop my builder.steve host, which might mean I'd come out ahead in terms of used resources.
  • leave.steve
    • Torrent-box.
    • Upgrading was painless, I only run rtorrent, and a simple object storage system of my own devising.

All upgrades were painless, with only one real surprise - the attic-backup software was removed from Debian.

Although I do intend to retry using Lars's excellent obnam in the near future, pragmatically I wanted to stick with what I'm familiar with. Borg backup is a fork of attic that I've been aware of for a long time, but I never quite had a reason to try it out. Setting it up pretty much just meant editing my backup-script:

s/attic/borg/g

Once I did that, and created some new destinations all was good:

borg@rsync.io ~ $ borg init /backups/git.steve.org.uk.borg/
borg@rsync.io ~ $ borg init /backups/master.steve.org.uk.borg/
borg@rsync.io ~ $ ..
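
After that, a nightly run from the backup script presumably boils down to a borg create invocation; a sketch, with the archive name and backed-up paths being hypothetical:

# create a dated archive in the remote repository
borg create --stats \
    borg@rsync.io:/backups/git.steve.org.uk.borg::$(date +%Y-%m-%d) \
    /home /etc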

Upgrading other hosts, for example my website(s), and my email-box, will be more complex and fiddly. On that basis they will definitely wait for the formal stretch release.

But having a couple of hosts running the frozen distribution is good for testing, and to let me see what is new.

Jonathan Dowland: yakking

24 May, 2017 - 20:07

I've written a guest post for the Yakking Blog — "A WadC successor in Haskell?". It's mainly on the topic of Haskell, with WadC as a use-case for a thought experiment.

Yakking is a collaborative blog geared towards beginner software engineers that is put together by some friends of mine. I was talking to them about contributing a blog post on a completely different topic a while ago, but that has not come to fruition (there or anywhere, yet). When I wrote up the notes that formed the basis of this blog post, I realised it might be a good fit.

Take a look at some of their other posts, and if you find it interesting, subscribe!

Michal Čihař: Weblate 2.14.1

24 May, 2017 - 15:00

Weblate 2.14.1 has been released today. It is a bugfix release fixing possible migration issues, search results navigation and some minor security issues.

Full list of changes:

  • Fixed possible error when paginating search results.
  • Fixed migrations from older versions in some corner cases.
  • Fixed possible CSRF on project watch and unwatch.
  • The password reset no longer authenticates the user.
  • Fixed possible captcha bypass on forgotten password.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using demo as the password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is being prepared; you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Dirk Eddelbuettel: Rcpp 0.12.11: Loads of goodies

24 May, 2017 - 02:41

The eleventh update in the 0.12.* series of Rcpp landed on CRAN yesterday following the initial upload on the weekend, and the Debian package and Windows binaries should follow as usual. The 0.12.11 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, the 0.12.8 release in November, the 0.12.9 release in January, and the 0.12.10 release in March --- making it the twelfth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1026 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release follows on the heels of R's 3.4.0 release and addresses one or two issues from the transition, along with a literal boatload of other fixes and enhancements. James "coatless" Balamuta was once again tireless in making the documentation better, Kirill Mueller addressed a number of more obscure compiler warnings (triggered under -Wextra and the like), Jim Hester improved exception handling, and much more was done, mostly by the Rcpp Core team. All changes are listed below in some detail.

One big change that JJ made is that Rcpp Attributes now also generate the now-almost-required package registration. (For background, I blogged about this one, two, three times.) We tested this, and do not expect it to throw curveballs; it should cover most cases, whether you have an existing src/init.c or do not have registration set in your NAMESPACE. But one never knows, and one first post-release buglet related to how devtools tests things has already been fixed in this PR by JJ.

Changes in Rcpp version 0.12.11 (2017-05-20)
  • Changes in Rcpp API:

    • Rcpp::exceptions can now be constructed without a call stack (Jim Hester in #663 addressing #664).

    • Somewhat spurious compiler messages under very verbose settings are now suppressed (Kirill Mueller in #670, #671, #672, #687, #688, #691).

    • Refreshed the included tinyformat template library (James Balamuta in #674 addressing #673).

    • Added printf-like syntax support for exception classes and variadic templating for Rcpp::stop and Rcpp::warning (James Balamuta in #676).

    • Exception messages have been rewritten to provide additional information. (James Balamuta in #676 and #677 addressing #184).

    • One more instance of Rf_mkString is protected from garbage collection (Dirk in #686 addressing #685).

    • Two exception specifications that are no longer tolerated by g++-7.1 or later were removed (Dirk in #690 addressing #689).

  • Changes in Rcpp Documentation:

  • Changes in Rcpp Sugar:

    • Added sugar function trimws (Nathan Russell in #680 addressing #679).
  • Changes in Rcpp Attributes:

    • Automatically generate native routine registrations (JJ in #694)

    • The plugins for C++11, C++14, C++17 now set the values R 3.4.0 or later expects; a plugin for C++98 was added (Dirk in #684 addressing #683).

  • Changes in Rcpp support functions:

    • The Rcpp.package.skeleton() function now creates a package registration file provided R 3.4.0 or later is used (Dirk in #692)

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Reproducible builds folks: Reproducible Builds: week 108 in Stretch cycle

24 May, 2017 - 01:43

Here's what happened in the Reproducible Builds effort between Sunday May 14 and Saturday May 20 2017:

News and Media coverage
  • We've reached 94.0% reproducible packages on testing/amd64! (NB. without build path variation)
  • Maria Glukhova was interviewed on It's FOSS about her involvement with Reproducible Builds with respect to Outreachy.
IRC meeting

Our next IRC meeting has been scheduled for Thursday June 1 at 16:00 UTC.

Packages reviewed and fixed, bugs filed, etc.

Bernhard M. Wiedemann:

Chris Lamb:

Reviews of unreproducible packages

35 package reviews have been added, 28 have been updated and 12 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been added:

diffoscope development

strip-nondeterminism development

tests.reproducible-builds.org

Holger wrote a new systemd-based scheduling system replacing 162 constantly running Jenkins jobs which were slowing down job execution in general:

  • Nothing fancy really, just 370 lines of shell code in two scripts; out of these 370 lines, 80 are comments and 162 are node definitions for those 162 "jobs".
  • Worker logs are not yet as good as with Jenkins, but usually we don't need realtime log viewing of specific builds. Or rather, it's a waste of time to do it. (Actual package build logs remain unchanged.)
  • Builds are a lot faster for the fast archs, though there's not much difference on armhf.
  • The switch was made on April 12 for i386, and a week later for the rest. In the build activity graphs (ordered with i386 on top, then amd64, armhf and arm64), the moment of the switch is pretty visible everywhere except on armhf.

Misc.

This week's edition was written by Chris Lamb, Holger Levsen, Bernhard M. Wiedemann, Vagrant Cascadian and Maria Glukhova & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Tianon Gravi: Debuerreotype

23 May, 2017 - 13:00

Following in the footsteps of one of my favorite Debian Developers, Chris Lamb / lamby (who is quite prolific in the reproducible builds effort within Debian), I’ve started a new project based on snapshot.debian.org (time-based snapshots of the Debian archive) and some of lamby’s work for creating reproducible Debian (debootstrap) rootfs tarballs.

The project is named “Debuerreotype” as an homage to the photography roots of the word “snapshot” and the daguerreotype process which was an early method of taking photographs. The essential goal is to create “photographs” of a minimal Debian rootfs, so the name seemed appropriate (even if it’s a bit on the “mouthful” side).

The end-goal is to create and release Debian rootfs tarballs for a given point-in-time (especially for use in Docker) which should be fully reproducible, and thus improve confidence in the provenance of the Debian Docker base images.
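
The core trick is pointing debootstrap at a fixed snapshot.debian.org URL and then normalizing everything time-dependent. A rough sketch of the idea follows; this is not Debuerreotype's actual scripts, which do considerably more cleanup and normalization:

#!/bin/sh
# hypothetical sketch of a reproducible rootfs build, not Debuerreotype itself
set -eu
epoch='2017-05-16 00:00:00 UTC'
mirror=http://snapshot.debian.org/archive/debian/20170516T000000Z/
debootstrap --variant=minbase stretch rootfs "$mirror"
# clamp any newer mtimes to the snapshot epoch so the tarball is deterministic
find rootfs -newermt "$epoch" -print0 |
    xargs -0r touch --no-dereference --date="$epoch"
tar --numeric-owner --sort=name -C rootfs -cf rootfs.tar .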

For more information about reproducibility and why it matters, see reproducible-builds.org, which has more thorough explanations of the why and how and links to other important work such as the reproducible builds effort in Debian (for Debian package builds).

In order to verify that the tool actually works as intended, I ran builds against seven explicit architectures (amd64, arm64, armel, armhf, i386, ppc64el, s390x) and eight explicit suites (oldstable, stable, testing, unstable, wheezy, jessie, stretch, sid).

I used a timestamp value of 2017-05-16T00:00:00Z, and skipped combinations that don’t exist (such as wheezy on arm64) or aren’t supported anymore (such as wheezy on s390x). I ran the scripts repeatedly over several days, using diffoscope to compare the results.

While doing said testing, I ran across #857803, and added a workaround. There's also a minor outstanding issue with wheezy's reproducibility that I haven't had a chance to dig very deeply into yet (but it's pretty benign, and wheezy's LTS support window ends 2018-05-31, so I'm not too stressed about it).

I’ve also packaged the tool for Debian, and submitted it into the NEW queue, so hopefully the FTP Masters will look favorably upon this being a tool that’s available to install from the Debian archive as well. 😇

Anyhow, please give it a try, have fun, and as always, report bugs!

Gunnar Wolf: Open Source Symposium 2017

23 May, 2017 - 00:21

I travelled (for three days only!) to Argentina, to be a part of the Open Source Symposium 2017, a co-located event of the International Conference on Software Engineering.

This is, all in all, an interesting although small conference — we are around 30 people in the room. It is quite an unusual conference for me, as it is among the first "formal" academic conferences I have been part of. The sessions have so far been quite interesting.
The highlight: the proceedings! They managed to publish the proceedings via the "formal" academic channels (a nice hard-cover Springer volume) under an Open Access license (which is sadly not usual, as such volumes are unbelievably expensive). So, you can download the full proceedings, or article by article, in EPUB or in PDF...
...which is very, very nice :)
Previous editions of this symposium also have their respective proceedings available, but AFAICT they have not been downloadable.
So, get the book; it provides very interesting and original insights into our community, seen from several quite novel angles!


Michal Čihař: HackerOne experience with Weblate

22 May, 2017 - 17:00

Weblate started using HackerOne Community Edition some time ago, and I think it's worth sharing my experience. Do you have an open source project and want to get more attention from the security community? This post describes how it looks from the perspective of a pretty small project.

I applied with Weblate to HackerOne Community Edition at the end of March and it was approved early in April. Based on their recommendations I started in invite-only mode, but that really didn't bring much attention (exactly zero reports), so I decided to go public.

I asked for the project to be made public just after coming back from a two-week vacation, expecting the approval to take some time, during which I'd settle the things that had popped up while I was away. In the end it was approved within a single day, so I was immediately under fire from incoming reports.

I was surprised that they didn't lie: you really will get a huge number of issues right after making your project public. Most of them were quite simple and repetitive (judging by the number of duplicates), but they provided valuable input.

Even more surprisingly, a second peak came in when I started to disclose resolved issues (once Weblate 2.14 had been released).

Overall, the issues could be divided into a few groups:

  • Server configuration issues, such as a lack of Content-Security-Policy headers (a quick way to check this yourself is shown after this list). This is certainly good security practice and we really didn't follow it in all cases. The situation should be much better now.
  • Lack of rate limiting in Weblate. We really hadn't tried to do that, and many reporters (correctly) showed that this is something that should be addressed at important entry points such as authentication. Weblate 2.14 brought a lot of features in this area.
  • Not using https where applicable. Yes, some APIs or web sites did not support https in the past, but now they do and I hadn't noticed.
  • Several pages were vulnerable to CSRF, as they were using GET where POST with CSRF protection would be more appropriate.
  • Lack of password strength validation. I've incorporated Django's password validation into Weblate, hopefully ruling out the weakest passwords.
  • Several issues in authentication using Python Social Auth. I had never really looked at how the authentication works there, and there are some questionable decisions or bugs. Some of the bugs have already been addressed in current releases, but there are still some to solve.
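
Checking a server for that first class of issue is something anyone can do from a shell; for example, listing which of the common security headers a site sends (using hosted.weblate.org purely as an example target):

$ curl -sI https://hosted.weblate.org/ | grep -iE 'content-security|x-frame|x-content-type'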

In the end it was a really challenging week to cope with all the incoming reports, but I think I managed it quite well. The HackerOne metrics say I averaged two hours to respond to incoming reports, which I don't think will be sustainable in the long term :-).

Anyway, thanks to this you can now enjoy Weblate 2.14, which is more secure than any release before. If you have not yet upgraded, you might consider doing so now, or look into our support offering for self-hosted Weblate.

The downside of all this was that the initial publishing on HackerOne made our website the target of a lot of automated tools, and the web server was not really ready for that. I'm really sorry to all Hosted Weblate users who were affected by this. It has been addressed now, but the infrastructure really should have been prepared for it beforehand; the request counts on the nginx server spiked massively when the program went public.

I'm really glad I could make Weblate available on HackerOne, as it will clearly improve its security and the security of our hosted offering. I will certainly consider providing swag and/or bounties for further severe reports, but that won't be possible without enough funding for Weblate.

Filed under: Debian English SUSE Weblate


Creative Commons License: copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.