Planet Debian

Michael Prokop: DebConf16 in Cape Town/South Africa: Lessons learnt

20 July, 2016 - 03:48

DebConf 16 in Cape Town/South Africa was fantastic for many reasons.

My Cape Town/South Africa/culture/flight-related lessons:
  • Avoid flying on Sundays (especially in/from Austria where plenty of hotlines are closed on Sundays or at least not open when you need them)
  • Actually recline your seat on the flight when trying to sleep, and don't forget that this option exists *cough*
  • While UCT claims to take energy saving quite seriously (e.g. “turn off the lights” signs in many places around campus), several toilets flush all their water even for just small™ business, and two big lights in front of a main building seem to shine all day long for no apparent reason
  • There doesn’t seem to be a standard for which side the hot vs. cold water taps are on
  • Soap pieces and towels in several toilets
  • For pedestrians there’s just a very short green phase at the traffic lights (~2-3 seconds); after that, blinking red lights indicate that you may continue crossing the street (but *should* not start walking) until the light is fully red again (not that many people seem to care about the rules anyway :))
  • Car hazard lights are used for saying thanks (compared to hand waving in e.g. Austria)
  • The 40km/h speed limit signs on the roads seem to be showing the recommended minimum speed :-)
  • There are many speed bumps on the roads
  • Geese quacking past 11:00 p.m. close to a sleeping room are something I’m also not used to :-)
  • Announced downtimes for the Internet connection are something I’m not used to
  • WLAN in the dorms of UCT, as well as in any other place I went to at UCT, worked excellently (measured ~22-26 Mbit/s downstream in my room, around 26 Mbit/s in the hacklab) (kudos!)
  • WLAN is available even on top of Table Mountain (working and free, without any registration)
  • The Number26 credit card is great for withdrawing money from ATMs without any extra fees from common credit card companies (except for the fee the ATM itself charges, which it displays on-site beforehand anyway)
  • Splitwise is a nice way to share expenses on the road, especially with its mobile app and the money beaming using the Number26 mobile app
My technical lessons from DebConf16:
  • ran into way too many yak-shaving situations, some of them might warrant separate blog posts…
  • finally got my hands on gbp-pq (manage quilt patches on patch-queue branches in git): it's very nice to be able to work with plain git and then get patches for your changes. Keeping upstream patches (like cherry-picks) inside debian/patches/ and the Debian-specific changes inside debian/patches/debian/ is a lovely idea; this can be easily achieved via “Gbp-Pq: Topic debian” with gbp’s pq (see the sketch after this list) and is used e.g. in pkg-systemd. Thanks to Michael Biebl for the hint and helping hand
  • David Bremner’s gitpkg/git-debcherry is something to also be aware of (thanks for the reminder, gregoa)
  • autorevision: extracts revision metadata from your VCS repository (thanks to pabs)
  • blhc: build log hardening check
  • Guido’s gbp skills exchange session reminded me once again that I should use `gbp import-dsc --download $URL_TO_DSC` more often
  • sources.debian.net features specific copyright + patches sections (thanks, Matthieu Caneill)
  • dpkg-mergechangelogs(1) for 3-way merge of debian/changelog files (thanks, buxy)
  • meta-git from pkg-perl is always worth a closer look
  • ifupdown2 (its current version is also available in jessie-backports!) has some nice features, like `ifquery --running $interface` to get the live configuration of a network interface, JSON support (`ifquery --format=json …`), and Mako template support to generate configuration for plenty of interfaces
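
A minimal sketch of the patch-queue workflow from the gbp-pq item above (the example commit is illustrative; see gbp-pq(1) for the real details):

# inside a package's git checkout: turn debian/patches/ into git commits
gbp pq import
# hack with plain git, one commit per patch
git commit -a -m 'Fix the frobnicator'    # illustrative commit
# a commit whose message contains the line "Gbp-Pq: Topic debian" is
# exported under debian/patches/debian/ instead of debian/patches/
gbp pq export    # switch back and regenerate debian/patches/ + series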

BTW, thanks to the video team the recordings from the sessions are available online.

Joey Hess: Re: Debugging over email

19 July, 2016 - 23:57

Lars wrote about the remote debugging problem:

I write free software and I have some users. My primary support channels are over email and IRC, which means I do not have direct access to the system where my software runs. When one of my users has a problem, we go through one or more cycles of them reporting what they see and me asking them for more information, or asking them to try this thing or that thing and report results. This can be quite frustrating.

I want, nay, need to improve this.

This is also something I've thought about on and off; it affects me most every day.

I've found that building the test suite into the program, such that users can run it at any time, is a great way to smoke out problems. If a user thinks they have problem A but the test suite explodes, or also turns up problems B C D, then I have much more than the user's problem report to go on. git annex test is a good example of this.

Asking users to provide a recipe to reproduce the bug is very helpful; I do it in the git-annex bug report template. While not all users do, and users often provide a reproduction recipe that doesn't quite work, it's great in triage to be able to try a set of steps without thinking much and see if you can reproduce the bug. So I tend to look at such bug reports first, and solve them more quickly, which tends towards a virtuous cycle.

I've noticed that reams of debugging output, logs, test suite failures, etc can be useful once I'm well into tracking a problem down. But during triage, they make it harder to understand what the problem actually is. Information overload. Being able to reproduce the problem myself is far more valuable than this stuff.

I've noticed that once I am in a position to run some commands in the environment that has the problem, it seems to be much easier to solve it than when I'm trying to get the user to debug it remotely. This must be partly psychological?

Partly, I think that the feeling of being at a remove from the system makes it harder to think of what to do. And then there are the times where the user pastes some output of running some commands and I mentally skip right over an important part of it, because I didn't think to run one of the commands myself.

I wonder if it would be helpful to have a kind of ssh equivalent, where all commands get vetted by the remote user before being run on their system. (And the user can also see command output before it gets sent back, to NACK sending of personal information.) So, it looks and feels a lot like you're in a mosh session to the user's computer (which need not have a public IP or have an open ssh port at all), although one with a lot of lag and where rm -rf / doesn't go through.
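
As a toy illustration of that idea (purely hypothetical; a line-based stdin stands in for the network transport), the user-side loop might look like:

#!/bin/sh
# toy sketch of a "vetted shell": each proposed command must be approved
# before it runs, and its output approved before it is sent back
while IFS= read -r cmd; do
    printf 'developer wants to run: %s -- allow? [y/N] ' "$cmd" >&2
    read -r ok </dev/tty
    [ "$ok" = y ] || { echo DENIED; continue; }
    out=$(sh -c "$cmd" 2>&1)
    printf '%s\n-- send this output back? [y/N] ' "$out" >&2
    read -r ok </dev/tty
    [ "$ok" = y ] && printf '%s\n' "$out" || echo WITHHELD
done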

Lars Wirzenius: Debugging over email

19 July, 2016 - 22:08

I write free software and I have some users. My primary support channels are over email and IRC, which means I do not have direct access to the system where my software runs. When one of my users has a problem, we go through one or more cycles of them reporting what they see and me asking them for more information, or asking them to try this thing or that thing and report results. This can be quite frustrating.

I want, nay, need to improve this. I've been thinking about this for a while, and talking with friends about it, and here are my current ideas.

First idea: have a script that gathers as much information as possible, which the user can run. For example, log files, full configuration, full environment, etc. The user would then mail the output to me. The information will need to be anonymised suitably so that no actual secrets are leaked. This would be similar to Debian's package specific reportbug scripts.
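
A sketch of what such a gather script might collect (the tool name, paths, and the anonymisation step are all placeholders):

#!/bin/sh
# hypothetical "gather debug info" script for a tool called yourtool
out=$(mktemp /tmp/yourtool-report.XXXXXX)
{
    echo '== version ==';     yourtool --version
    echo '== config ==';      cat ~/.config/yourtool/*.conf 2>/dev/null
    echo '== environment =='; env | sort
    echo '== recent log ==';  tail -n 200 ~/.cache/yourtool/log 2>/dev/null
} >"$out" 2>&1
# crude anonymisation; real secrets need more care than this
sed -i -e "s,$HOME,~,g" -e "s/$USER/USER/g" "$out"
echo "Please attach $out to your mail."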

Second idea: make it less likely that the user needs help solving their issue, by giving better error messages. This requires error messages to carry sufficient explanation that a user can solve their problem. That doesn't necessarily mean a lot of text; it also means code that analyses the situation when the error happens, includes things that are relevant to the problem-solving process, and gives error messages that are as specific as possible. Example: don't just fail saying "write error", but make the code find out why writing caused an error.
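
For instance, a bare "write error" could be diagnosed before being reported (the file path and the set of checks are illustrative):

#!/bin/sh
# sketch: turn a bare "write error" into a specific diagnosis
f=/var/lib/yourtool/state
dir=$(dirname "$f")
if ! touch "$f" 2>/dev/null; then
    if [ ! -d "$dir" ]; then
        echo "cannot write $f: $dir does not exist" >&2
    elif [ ! -w "$dir" ]; then
        echo "cannot write $f: no write permission on $dir" >&2
    elif [ "$(df -P "$dir" | awk 'NR==2 {print $4}')" = 0 ]; then
        echo "cannot write $f: filesystem is full" >&2
    else
        echo "cannot write $f: unknown reason" >&2
    fi
    exit 1
fi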

Third idea: in addition to better error messages, we might provide diagnostic tools as well.

A friend suggested having a script that sets up a known-good set of operations and verifies they work. This would establish a known-working baseline, or smoke test, so that we can rule out things like "the software isn't completely installed".
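
Such a baseline script can be very simple (yourtool and its subcommands are placeholders):

#!/bin/sh
# hypothetical smoke test: run a known-good sequence, failing loudly on
# the first problem, to rule out "not installed" class issues up front
set -e
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT
yourtool --version              # runnable at all?
yourtool init "$tmp/project"    # known-good operation 1
yourtool run "$tmp/project"     # known-good operation 2
echo 'baseline OK'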

Do you have ideas? Mail me (liw@liw.fi) or tell me on identi.ca (@liw) or Twitter (@larswirzenius).

Dirk Eddelbuettel: Rcpp 0.12.6: Rolling on

19 July, 2016 - 19:49

The sixth update in the 0.12.* series of Rcpp arrived on the CRAN network for GNU R a few hours ago, and was just pushed to Debian. This 0.12.6 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, and the 0.12.5 release in May --- making it the tenth release at the steady bi-monthly release frequency. Just like the previous release, this one is once again more of a refining maintenance release which addresses small bugs, nuisances or documentation issues without adding any major new features. That said, some nice features (such as caching support for sourceCpp() and friends) were added.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 703 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by about forty packages from the last release in May!

Similar to the previous releases, we have contributions from first-time committers. Artem Klevtsov made na_omit run faster on vectors without NA values. Otherwise, we had many contributions from "regulars" like Kirill Mueller, James "coatless" Balamuta and Dan Dillon as well as from fellow Rcpp Core contributors. Some noteworthy highlights are encoding and string fixes, generally more robust builds, a new iterator-based approach for vectorized programming, the aforementioned caching for sourceCpp(), and several documentation enhancements. More details are below.

Changes in Rcpp version 0.12.6 (2016-07-18)
  • Changes in Rcpp API:

    • The long long data type is used only if it is available, to avoid compiler warnings (Kirill Müller in #488).

    • The compiler is made aware that stop() never returns, to improve code path analysis (Kirill Müller in #487 addressing issue #486).

    • String replacement was corrected (Qiang in #479 following mailing list bug report by Masaki Tsuda)

    • Allow for UTF-8 encoding in error messages via RCPP_USING_UTF8_ERROR_STRING macro (Qin Wenfeng in #493)

    • The R function Rf_warningcall is now provided as well (as usual without leading Rf_) (#497 fixing #495)

  • Changes in Rcpp Sugar:

    • Const-ness of min and max functions has been corrected. (Dan Dillon in PR #478 fixing issue #477).

    • Ambiguities for matrix/vector and scalar operations have been fixed (Dan Dillon in PR #476 fixing issue #475).

    • New algorithm header using an iterator-based approach for vectorized functions (Dan in PR #481 revisiting PR #428 and addressing issue #426, with further work by Kirill in PR #488 and Nathan in #503 fixing issue #502).

    • The na_omit() function is now faster for vectors without NA values (Artem Klevtsov in PR #492)

  • Changes in Rcpp Attributes:

    • Add cacheDir argument to sourceCpp() to enable caching of shared libraries across R sessions (JJ in #504).

    • Code generation now deals correctly with packages containing a dot in their name (Qiang in #501 fixing #500).

  • Changes in Rcpp Documentation:

    • A section on default parameters was added to the Rcpp FAQ vignette (James Balamuta in #505 fixing #418).

    • The Rcpp-attributes vignette is now mentioned more prominently in question one of the Rcpp FAQ vignette.

    • The Rcpp Quick Reference vignette received a facelift with new sections on Rcpp attributes and plugins being added. (James Balamuta in #509 fixing #484).

    • The bib file was updated with respect to the recent JSS publication for RProtoBuf.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Chris Lamb: Python quirk: os.stat's return type

19 July, 2016 - 17:20

import os
import stat

st = os.stat('/etc/fstab')

# __getitem__
x = st[stat.ST_MTIME]
print((x, type(x)))

# __getattr__
x = st.st_mtime
print((x, type(x)))

Output:

(1441565864, <class 'int'>)
(1441565864.3485234, <class 'float'>)

The quirk: the legacy tuple-style access returns the integer timestamp, while attribute access returns a float with sub-second precision.

John Goerzen: Building a home firewall: review of pfsense

19 July, 2016 - 04:34

For some time now, I’ve been running OpenWRT on an RT-N66U device. I initially set that up because I had previously been using my Debian-based file/VM server as a firewall, and this had some downsides: every time I wanted to reboot that, Internet for the whole house was down; shorewall took a fair bit of care and feeding; etc.

I’ve been having indications that all is not well with OpenWRT or the N66U in the last few days, and some long-term annoyances prompted me to search out a different solution. I figured I could buy an embedded x86 device, slap Debian on it, and be set.

The device I wound up purchasing happened to have pfsense preinstalled, so I thought I’d give it a try.

As expected, with hardware like that to work with, it was a lot more capable than OpenWRT and had more features. However, I encountered a number of surprising issues.

The biggest annoyance was that the system wouldn’t allow me to set up a static DHCP entry with the same IP for multiple MAC addresses. This is a very simple configuration in the underlying DHCP server, and OpenWRT permitted it without issue. It is quite useful: my laptop gets the same IP whether connected by wifi or Ethernet, and I have used it for years with no issue. Googling it a bit turned up some rather arrogant pfsense people saying that this is “broken” and poor design, and that your wired and wireless networks should be on different VLANs anyhow. They also said “just give it the same hostname for the different IPs” — but it rejects this too. Sigh. I discovered, however, that downloading the pfsense backup XML file, editing the IP within, and re-uploading it gets me what I want with no ill effects!

So then I went to set up DNS. I tried to enable the “DNS Forwarder”, but it wouldn’t let me do that while the “DNS Resolver” was still active. Digging in just a bit, it appears that the DNS Forwarder and DNS Resolver both provide forwarding and resolution features; they just have different underlying implementations. This is not clear at all in the interface.

Next stop: traffic shaping. Since I use VOIP for work, this is vitally important for me. I dove in, and found a list of XML filenames for wizards: one for “Dedicated Links” and another for “Multiple Lan/Wan”. Hmmm. Some Googling again turned up that everyone suggests using the “Multiple Lan/Wan” wizard. Fine. I set it up, and notice that when I start an upload, my download performance absolutely tanks. Some investigation shows that outbound ACKs aren’t being handled properly. The wizard had created a qACK queue, but neglected to create a packet match rule for it, so ACKs were not being dealt with appropriately. Fixed that with a rule of my own design, and now downloads are working better again. I also needed to boost the bandwidth allocated to qACK (setting it to 25% seemed to do the trick).

Then there were the firewall rules: the “interface” section is first-match-wins, whereas the “floating” section is last-match-wins. This is rather non-obvious.

Getting past all the interface glitches, however, the system looks powerful, solid, and well-engineered under the hood, and fairly easy to manage.

Reproducible builds folks: Preparing for the second release of reprotest

18 July, 2016 - 22:51

Author: ceridwen

I now have working test environments set up for null (no container, build on the host system), schroot, and qemu. After fixing some bugs, null and qemu now pass all their tests!

schroot still has a permission error related to disorderfs. Since the same code works for null and qemu and for schroot when disorderfs is disabled, it's something specific to disorderfs and/or its combination with schroot. The following is debug output that shows ls for the build directory on the testbed before and after the mock build, and stat for both the build directory and the mock build artifact itself. The first control run, without disorderfs, succeeds:

test.py: DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/control/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rwxr--r-- 1 user user 2340 Jun 28 18:43 mock_build.py
-rwxr--r-- 1 user user  175 Jun  3 15:42 mock_failure.py
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18 tests.py
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['sh', '-ec', 'cd /tmp/autopkgtest.5oMipL/control/ ;\n python3 mock_build.py ;\n'], kind short, sout raw, serr pipe, env ['LANG=en_US.UTF-8', 'HOME=/nonexistent/first-build', 'VIRTUAL_ENV=~/code/reprotest/.tox/py35', 'PATH=~/code/reprotest/.tox/py35/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'PYTHONHASHSEED=559200286', 'TZ=GMT+12']
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/control/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rw-r--r-- 1 root root    0 Jul 18 15:06 artifact
-rwxr--r-- 1 user user 2340 Jun 28 18:43 mock_build.py
-rwxr--r-- 1 user user  175 Jun  3 15:42 mock_failure.py
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18 tests.py
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/control/'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/control/'
  Size: 4096        Blocks: 8          IO Block: 4096   directory
Device: 56h/86d Inode: 1351634     Links: 3
Access: (0755/drwxr-xr-x)  Uid: ( 1000/    user)   Gid: ( 1000/    user)
Access: 2016-07-18 15:06:31.105915342 -0400
Modify: 2016-07-18 15:06:31.089915352 -0400
Change: 2016-07-18 15:06:31.089915352 -0400
 Birth: -
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/control/artifact'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/control/artifact'
  Size: 0           Blocks: 0          IO Block: 4096   regular empty file
Device: fc01h/64513d    Inode: 40767795    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2016-07-18 15:06:31.089915352 -0400
Modify: 2016-07-18 15:06:31.089915352 -0400
Change: 2016-07-18 15:06:31.089915352 -0400
 Birth: -
test.py: DBG: testbed command exited with code 0
test.py: DBG: sending command to testbed: copyup /tmp/autopkgtest.5oMipL/control/artifact /tmp/tmpw_mwks82/control_artifact
schroot: DBG: executing copyup /tmp/autopkgtest.5oMipL/control/artifact /tmp/tmpw_mwks82/control_artifact
schroot: DBG: copyup_shareddir: tb /tmp/autopkgtest.5oMipL/control/artifact host /tmp/tmpw_mwks82/control_artifact is_dir False downtmp_host /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52//tmp/autopkgtest.5oMipL
schroot: DBG: copyup_shareddir: tb(host) /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52/tmp/autopkgtest.5oMipL/control/artifact is not already at destination /tmp/tmpw_mwks82/control_artifact, copying
test.py: DBG: got reply from testbed: ok

That last bit indicates that the copy command for the build artifact from the testbed to a temporary directory on the host succeeded. This is the debug output from the second run, with disorderfs enabled:

test.py: DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/disorderfs/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rwxr--r-- 1 user user 2340 Jun 28 18:43 mock_build.py
-rwxr--r-- 1 user user  175 Jun  3 15:42 mock_failure.py
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18 tests.py
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['sh', '-ec', 'cd /tmp/autopkgtest.5oMipL/disorderfs/ ;\n umask 0002 ;\n linux64 --uname-2.6 python3 mock_build.py ;\n'], kind short, sout raw, serr pipe, env ['LC_ALL=fr_CH.UTF-8', 'CAPTURE_ENVIRONMENT=i_capture_the_environment', 'HOME=/nonexistent/second-build', 'VIRTUAL_ENV=~/code/reprotest/.tox/py35', 'PATH=~/code/reprotest/.tox/py35/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin/i_capture_the_path', 'LANG=fr_CH.UTF-8', 'PYTHONHASHSEED=559200286', 'TZ=GMT-14']
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/disorderfs/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rw-r--r-- 1 root root    0 Jul 18 15:06 artifact
-rwxr--r-- 1 user user 2340 Jun 28 18:43 mock_build.py
-rwxr--r-- 1 user user  175 Jun  3 15:42 mock_failure.py
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18 tests.py
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/disorderfs/'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/disorderfs/'
  Size: 4096        Blocks: 8          IO Block: 4096   directory
Device: 58h/88d Inode: 1           Links: 3
Access: (0755/drwxr-xr-x)  Uid: ( 1000/    user)   Gid: ( 1000/    user)
Access: 2016-07-18 15:06:31.201915291 -0400
Modify: 2016-07-18 15:06:31.185915299 -0400
Change: 2016-07-18 15:06:31.185915299 -0400
 Birth: -
test.py: DBG: testbed command exited with code 0
test.py: DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/disorderfs/artifact'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/disorderfs/artifact'
  Size: 0           Blocks: 0          IO Block: 4096   regular empty file
Device: 58h/88d Inode: 7           Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2016-07-18 15:06:31.185915299 -0400
Modify: 2016-07-18 15:06:31.185915299 -0400
Change: 2016-07-18 15:06:31.185915299 -0400
 Birth: -
test.py: DBG: testbed command exited with code 0
test.py: DBG: sending command to testbed: copyup /tmp/autopkgtest.5oMipL/disorderfs/artifact /tmp/tmpw_mwks82/experiment_artifact
schroot: DBG: executing copyup /tmp/autopkgtest.5oMipL/disorderfs/artifact /tmp/tmpw_mwks82/experiment_artifact
schroot: DBG: copyup_shareddir: tb /tmp/autopkgtest.5oMipL/disorderfs/artifact host /tmp/tmpw_mwks82/experiment_artifact is_dir False downtmp_host /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52//tmp/autopkgtest.5oMipL
schroot: DBG: copyup_shareddir: tb(host) /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52/tmp/autopkgtest.5oMipL/disorderfs/artifact is not already at destination /tmp/tmpw_mwks82/experiment_artifact, copying
schroot: DBG: cleanup...
schroot: DBG: execute-timeout: schroot --run-session --quiet --directory=/ --chroot jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52 --user=root -- rm -rf -- /tmp/autopkgtest.5oMipL
rm: cannot remove '/tmp/autopkgtest.5oMipL/disorderfs': Device or resource busy
schroot: DBG: execute-timeout: schroot --quiet --end-session --chroot jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52
Unexpected error:
Traceback (most recent call last):
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 708, in mainloop
    command()
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 646, in command
    r = f(c, ce)
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 584, in cmd_copyup
    copyupdown(c, ce, True)
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 469, in copyupdown
    copyupdown_internal(ce[0], c[1:], upp)
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 494, in copyupdown_internal
    copyup_shareddir(sd[0], sd[1], dirsp, downtmp_host)
  File "~/code/reprotest/reprotest/lib/VirtSubproc.py", line 408, in copyup_shareddir
    shutil.copy(tb, host)
  File "/usr/lib/python3.5/shutil.py", line 235, in copy
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/usr/lib/python3.5/shutil.py", line 114, in copyfile
    with open(src, 'rb') as fsrc:
PermissionError: [Errno 13] Permission denied: '/var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52/tmp/autopkgtest.5oMipL/disorderfs/artifact'

ls shows that the artifact is created in the right place. However, when reprotest tries to copy it from the testbed to the host, it gets a permission error. The traceback comes from virt/schroot, and it's a Python open() call that's failing. Note that the permissions are wrong for the second run, but that's expected because my schroot is stable, so the umask bug isn't fixed yet; and the rm error comes from disorderfs not being unmounted early enough (see below). I expect to see the umask test fail, though, not a crash in every test where the build succeeds.

After a great deal of effort, I traced the bug that was causing the process to hang not to my code or autopkgtest's code, but to CPython and contextlib. It's supposed to be fixed in CPython 3.5.3, but for now I've worked around the problem by monkey-patching the patch provided in the latter issue onto contextlib.

Here is my current to-do list:

  • Fix PyPI not installing the virt/ scripts correctly.

  • Move the disorderfs unmount into the shell script. (When the virt/ scripts encounter an error, they try to delete a temporary directory, which fails if disorderfs is mounted, so the script needs to unmount it before that happens; see the sketch after this list.)

  • Find and fix the schroot/disorderfs permission error bug.

  • Convert my notes on setting up for the tests into something useful for users.

  • Write scripts to synch version numbers and documentation.

  • Fix the headers in the autopkgtest code to conform to reprotest style.

  • Add copyright information for the contextlib monkey-patch and the autopkgtest files I've changed.

  • Close #829113 as wontfix.
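
For the disorderfs unmount item above, a minimal sketch of the planned shell-script fix (the mount point is an example; disorderfs is FUSE-based, hence fusermount):

# unmount the FUSE mount before any cleanup path can run rm -rf on it
mnt=/tmp/autopkgtest.XXXXXX/disorderfs
trap 'fusermount -u "$mnt" 2>/dev/null || true' EXIT
# ... run the build inside "$mnt" here ...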

And here are the questions I'd like to resolve before the second release:

  • Is there any other documentation that's essential? Finishing the documentation will come later.

  • Should I release before finishing the rest of the variations? This will slow down the release of the first version with something resembling full functionality.

  • Do I need to write a chroot test now? Given the duplication with schroot, I'm unconvinced this is worthwhile.

Lisandro Damián Nicanor Pérez Meyer: KDEPIM ready to be more broadly tested

18 July, 2016 - 06:15
As was posted a couple of weeks ago, the latest version of KDE has been uploaded to unstable.

All packages are now uploaded and built and we believe this version is ready to be more broadly tested.

If you run unstable but have refrained from installing the kdepim packages up to now, we would appreciate it if you go ahead and install them now, reporting any issues that you may find.

Given that this is a big update that includes quite a number of plugins and libraries, it's strongly recommended that you restart your KDE session after updating the packages.

Happy hacking,

The Debian Qt/KDE Team.

Iustin Pop: Energy bar restored!

18 July, 2016 - 04:32

So, I've been sick. Quite sick, as for the past ~2 weeks I wasn't able to bike, run, work or do much besides watch movies, look at photos and play some light games (ARPGs rule in this case: all you need to do is keep the left mouse button pressed).

It was supposed to be only a light viral infection, but it took longer to clear out than I expected, probably due to it happening right after my dental procedure (and possibly me wanting to restart exercise too soon, too fast). Not fun; it felt like the thing that refills your energy/mana bar in games broke. I simply didn't feel restored, despite sleeping a lot; 2-3 naps per day sound good as long as they are restorative, and if they're not, sleeping is just a chore.

The funny thing is that recovery happened so slowly that when I finally had energy again it took me by surprise. It was like “oh, wait, I can actually stand and walk without feeling dizzy! Wohoo!” As such, yesterday was a glorious Saturday ☺

I was therefore able to walk a bit outside the house this weekend and feel like having a normal cold, not like being under a “cursed: -4 vitality” spell. I expect the final symptoms to clear out soon, and that I can very slowly start doing some light exercise again. Not tomorrow, though…

In the meantime, I'm sharing a picture from earlier this year that I found while looking through my stash. I was walking in the forest in Pontresina on a beautiful sunny day when a sudden gust of wind caused a lot of the snow on the trees to fly around and make it look a bit magical (the photo is unprocessed apart from conversion from raw to JPEG; this is how it came straight out of the camera):

Why a winter photo? Because that's exactly how cold I felt the previous weekend: 30°C outside, but I was going to the doctor in jeans and hoodie and cap, shivering…

Michael Stapelberg: mergebot: easily merging contributions

17 July, 2016 - 19:00

Recently, I was wondering why I was pushing off accepting contributions in Debian for longer than in other projects. It occurred to me that the effort to accept a contribution in Debian is way higher than in other FOSS projects. My remaining FOSS projects are on GitHub, where I can just click the “Merge” button after deciding a contribution looks good. In Debian, merging is actually a lot of work: I need to clone the repository, configure it, merge the patch, update the changelog, build and upload.

I wondered how close we can bring Debian to a model where accepting a contribution is just a single click as well. In principle, I think it can be done.

To demonstrate the feasibility and collect some feedback, I wrote a program called mergebot. The first stage is done: mergebot can be used on your local machine as a command-line tool. You provide it with the source package and bug number which contains the patch in question, and it will do the rest:

midna ~ $ mergebot -source_package=wit -bug=#831331
2016/07/17 12:06:06 will work on package "wit", bug "831331"
2016/07/17 12:06:07 Skipping MIME part with invalid Content-Disposition header (mime: no media type)
2016/07/17 12:06:07 gbp clone --pristine-tar git+ssh://git.debian.org/git/collab-maint/wit.git /tmp/mergebot-743062986/repo
2016/07/17 12:06:09 git config push.default matching
2016/07/17 12:06:09 git config --add remote.origin.push +refs/heads/*:refs/heads/*
2016/07/17 12:06:09 git config --add remote.origin.push +refs/tags/*:refs/tags/*
2016/07/17 12:06:09 git config user.email stapelberg AT debian DOT org
2016/07/17 12:06:09 patch -p1 -i ../latest.patch
2016/07/17 12:06:09 git add .
2016/07/17 12:06:09 git commit -a --author Chris Lamb <lamby AT debian DOT org> --message Fix for “wit: please make the build reproducible” (Closes: #831331)
2016/07/17 12:06:09 gbp dch --release --git-author --commit
2016/07/17 12:06:09 gbp buildpackage --git-tag --git-export-dir=../export --git-builder=sbuild -v -As --dist=unstable
2016/07/17 12:07:16 Merge and build successful!
2016/07/17 12:07:16 Please introspect the resulting Debian package and git repository, then push and upload:
2016/07/17 12:07:16 cd "/tmp/mergebot-743062986"
2016/07/17 12:07:16 (cd repo && git push)
2016/07/17 12:07:16 (cd export && debsign *.changes && dput *.changes)

midna ~ $ cd /tmp/mergebot-743062986/repo
midna /tmp/mergebot-743062986/repo $ git log HEAD~2..
commit d983d242ee546b2249a866afe664bac002a06859
Author: Michael Stapelberg <stapelberg AT debian DOT org>
Date:   Sun Jul 17 13:32:41 2016 +0200

    Update changelog for 2.31a-3 release

commit 5a327f5d66e924afc656ad71d3bfb242a9bd6ddc
Author: Chris Lamb <lamby AT debian DOT org>
Date:   Sun Jul 17 13:32:41 2016 +0200

    Fix for “wit: please make the build reproducible” (Closes: #831331)
midna /tmp/mergebot-743062986/repo $ git push
Counting objects: 11, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (11/11), done.
Writing objects: 100% (11/11), 1.59 KiB | 0 bytes/s, done.
Total 11 (delta 6), reused 0 (delta 0)
remote: Sending notification emails to: dispatch+wit_vcs@tracker.debian.org
remote: Sending notification emails to: dispatch+wit_vcs@tracker.debian.org
To git+ssh://git.debian.org/git/collab-maint/wit.git
   650ee05..d983d24  master -> master
 * [new tag]         debian/2.31a-3 -> debian/2.31a-3
midna /tmp/mergebot-743062986/repo $ cd ../export
midna /tmp/mergebot-743062986/export $ debsign *.changes && dput *.changes
[…]
Uploading wit_2.31a-3.dsc
Uploading wit_2.31a-3.debian.tar.xz
Uploading wit_2.31a-3_amd64.deb
Uploading wit_2.31a-3_amd64.changes

Of course, this is not quite as convenient as clicking a “Merge” button yet. I have some ideas on how to make that happen, but I need to know whether people are interested before I spend more time on this.

Please see github.com/Debian/mergebot for more details, and please get in touch if you think this is worthwhile or would even like to help. Feedback is accepted in the GitHub issue tracker for mergebot or the project mailing list mergebot-discuss. Thanks!

Vasudev Kamath: Switching from approx to apt-cacher-ng

17 July, 2016 - 18:00

After a long ~5-year journey with approx (starting in 2011), I finally wanted to switch to something new, like apt-cacher-ng. And after a bit of tweaking I finally managed to get apt-cacher-ng into my workflow.

Bit of History

I should first give you a brief account of how I started using approx. It all started at MiniDebConf 2011, which I organized at my alma mater. I met Jonas Smedegaard there, and from him I learned about approx. Jonas has a bunch of machines at his home and was an active user of approx; he showed it to me while explaining the Boxer project. I was quite impressed with approx. Back then I was using a slow 230kbps Internet connection and was also maintaining a couple of packages in Debian. Updating the pbuilder chroots was a time-consuming task for me, as I had to download packages multiple times over the slow connection. approx largely solved this problem, and I started using it.

Fast-forward 5 years: I now have quite fast Internet with a good FUP (about 50GB a month), but I still tend to use approx, which makes building packages quite a bit faster. I also use a couple of containers on my laptop, which all use my laptop as their approx cache.

Why switch?

So why change to apt-cacher-ng? Approx is a simple tool: it runs mainly under inetd and sits between apt and the repository on the Internet, whereas apt-cacher-ng provides a lot of features. Below are some, listed from the apt-cacher-ng manual.

  • Use of TLS/SSL repositories (may be possible with approx but I'm not sure how to do it)
  • Access control over who can access the caching server
  • Integration with debdelta (I've not tried it; approx also supports debdelta)
  • Avoiding use of apt-cacher-ng for some hosts
  • Avoiding caching of some file types
  • Partial mirroring for offline usage
  • Selection of IPv4 or IPv6 for connections

The biggest change I see is the speed difference between approx and apt-cacher-ng. I think this is mainly because apt-cacher-ng is threaded, whereas approx runs under inetd.

I do not need all the features of apt-cacher-ng at the moment, but who knows, in the future I might need some, and hence I decided to switch to apt-cacher-ng over approx.

Transition

The transition from approx to apt-cacher-ng was smoother than I expected. There are two approaches you can use: explicit routing and transparent routing. I prefer transparent routing, and I only had to change my /etc/apt/sources.list to use the actual repository URL.

deb http://deb.debian.org/debian unstable main contrib non-free
deb-src http://deb.debian.org/debian unstable main

deb http://deb.debian.org/debian experimental main contrib non-free
deb-src http://deb.debian.org/debian experimental main

After the above change I had to add a 01proxy configuration file to /etc/apt/apt.conf.d/ with the following content.

Acquire::http::Proxy "http://localhost:3142/";

I use explicit routing only when using apt-cacher-ng with pbuilder and debootstrap. The following snippet shows explicit routing through /etc/apt/sources.list.

deb http://localhost:3142/deb.debian.org/debian unstable main

Usage with pbuilder and friends

To use apt-cacher-ng with pbuilder you need to modify /etc/pbuilderrc to contain the following line:

MIRRORSITE=http://localhost:3142/deb.debian.org/debian

Usage with debootstrap

To use apt-cacher-ng with debootstrap, pass the MIRROR argument of debootstrap as http://localhost:3142/deb.debian.org/debian.
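
For example (suite and target directory are illustrative):

sudo debootstrap unstable /srv/chroot/sid \
    http://localhost:3142/deb.debian.org/debian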

Conclusion

I've now completed the full transition of my workflow to apt-cacher-ng and purged approx and its cache.

Though it works fine, I feel that two caches will be created when you use both transparent and explicit routing via the localhost:3142 URL. I'm sure it is possible to configure this to avoid duplication, but I've not yet figured out how. If you know how to fix this, do let me know.

Update

Jonas told me that it's not two caches but two routings: one for transparent routing and another for explicit routing. So I guess there is nothing here to fix :-).

Neil Williams: Deprecating dpkg-cross

17 July, 2016 - 17:30
Deprecating the dpkg-cross binary

After a discussion in the cross-toolchain BoF at DebConf16, the gross hack which is packaged as the dpkg-cross binary package, and its supporting perl module, has finally been deprecated, long after multiarch was actually delivered. Various reasons have complicated the final steps for dpkg-cross, and there remains one use for some of the files within the package, although not for the dpkg-cross binary itself.

2.6.14 has now been uploaded to unstable and introduces a new binary package, cross-config, so it will spend some time in NEW. The changes are summarised in the NEWS entry for 2.6.14.

The cross architecture configuration files have moved to the new cross-config package and the older dpkg-cross binary with supporting perl module are now deprecated. Future uploads will only include the cross-config package.

Use cross-config to retain support for autotools and CMake cross-building configuration.

If you use the deprecated dpkg-cross binary, now is the time to migrate away from these path changes. The dpkg-cross binary and the supporting perl module should NOT be expected to be part of Debian by the time of the Stretch release.

2.6.14 also marks the end of my involvement with dpkg-cross. The Uploaders list has been shortened but I'm still listed to be able to get 2.6.14 into NEW. A future release will drop the perl module and the dpkg-cross binary, retaining just the new cross-config package.

Valerie Young: Work after DebConf

17 July, 2016 - 08:42

First week after DebCamp and DebConf! Both were incredible — the Debian project and its contributors never fail to impress and delight me. Nonetheless it felt great to have a few quiet, peaceful days of uninterrupted programming.

Notes about last week:

1. Finished Mattia’s final suggestions for the conversion of the package set pages script to python.

Hopefully it will be deployed soon; it's awaiting final approval.

2. Replaced the bash code that produced the left navigation on the home page (and most other pages) with the mustache template the python scripts use.

Previously, html was constructed and spat out from both a python and a shell script — now we have a single, DRY mustache template. (At the top of the bash function that produced the navigation html, you will find the comment: “this is really quite incomprehensible and should be killed, the solution is to write all html pages with python…”. Turns out the intermediate solution is to use templates.)

3. Thought hard about navigation of the test website, and redesigned (by rearranging) links in the left hand navigation.

After code review, you will see these changes as well! Things to look forward to include:
– The top left is a link to the Debian dashboard on every page (except the package specific pages),
– The title of each page (except the package pages) stretches across the whole page (instead of being squashed into the upper left).
– Hover text has been added to most links in the left navigation.
– Links in left navigation have been reordered, and headers added.

Once you see the changes, please let me know if you think anything is unintuitive or confusing; everything can be easily changed!

4. Cross suite and architecture navigation enabled for most pages.

For most pages, you will be one click away from seeing the same statistics for a different suite or architecture! Whoo!

Notes about next week:

Last week I got carried away imagining minor improvements that could be made to the test website's UI, and I now have a backlog of ideas I’d like to implement. I’ve begun editing the script that makes most of the pages with statistics or package lists (for example, all packages with notes, or all recently tested packages) to use templates and contain a bit more descriptive text. I’d also like to do some revamping of the package set pages I converted.

These additional UI changes will be my first tasks for the coming week, since they are fresh in my mind and I’m quite excited about them. The following week I’d like to get back to the extensibility and database issues mentioned previously!

Paul Tagliamonte: The Open Source License API

17 July, 2016 - 02:30

Around a year ago, I started hacking together a machine readable version of the OSI approved licenses list, and casually picking parts up until it was ready to launch. A few weeks ago, we officially announced the osi license api, which is now live at api.opensource.org.

I also took a whack at writing a few API bindings, in Python, Ruby, and using the models from the API implementation itself in Go. In the following few weeks, Clint wrote one in Haskell, Eriol wrote one in Rust, and Oliver wrote one in R.

The data is sourced from a repo on GitHub, the licenses repo under OpenSourceOrg. Pull Requests against that repo are wildly encouraged! Additional data ideas, cleanup or more hand collected data would be wonderful!

In the meantime, use-cases for this API range from language package managers checking OSI approval of a license programmatically, to taking a license identifier as defined in one dataset (SPDX, for example) and finding the identifier as it exists in another system (DEP5, Wikipedia, TL;DR Legal).
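
For instance, a script could fetch one license record and read the cross-scheme identifiers off it (the endpoint layout here is an assumption based on the announcement; the repo documents the authoritative schema):

# full record for one license, including its identifiers in other schemes
curl -s https://api.opensource.org/license/MIT
# all approved licenses
curl -s https://api.opensource.org/licenses/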

Patches are hugely welcome, as are bug reports or ideas! I'd also love more API wrappers for other languages!

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, June 2016

16 July, 2016 - 13:31

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, 158.25 work hours have been dispatched among 11 paid contributors. Their reports are available:

DebConf 16 Presentation

If you want to know more about how the LTS project is organized, you can watch the presentation I gave during DebConf 16 in Cape Town.

Evolution of the situation

The number of sponsored hours increased a little bit, to 135 hours per month, thanks to 3 new sponsors (Laboratoire LEGI – UMR 5519 / CNRS, Quarantainenet BV, GNI MEDIA). Our funding goal is getting closer but it’s not there yet.

The security tracker currently lists 40 packages with a known CVE and the dla-needed.txt file lists 38 packages awaiting an update.

Thanks to our sponsors

New sponsors are in bold.

Lars Wirzenius: Two-factor auth for local logins in Debian using U2F keys

15 July, 2016 - 18:19

Warning: This blog post includes instructions for a procedure that can lead you to lock yourself out of your computer. Even if everything goes well, you'll be hunted by dragons. Keep backups, have a rescue system on a USB stick, and wear flameproof clothing. Also, have fun, and tell your loved ones you love them.

I've recently gotten two U2F keys. U2F is an open standard for authentication using hardware tokens. It's probably mostly meant for website logins, but I wanted to have it for local logins on my laptop running Debian. (I also offer a line of stylish aluminium foil hats.)

Having two-factor authentication (2FA) for local logins improves security if you need to log in (or unlock a screen lock) in a public or potentially hostile place, such as a cafe, a train, or a meeting room at a client. If they have video cameras, they can film you typing your password, and get the password that way.

If you set up 2FA using a hardware token, your enemies will also need to lure you into a cave, where a dragon will use a precision flame to incinerate you in a way that leaves the U2F key intact, after which your enemies steal the key, log into your laptop and leak your cat GIF collection.

Looking up information for how to set this up, I found a blog post by Sean Brewer, for Ubuntu 14.04. That got me started. Here's what I understand:

  • PAM is the technology in Debian for handling authentication for logins and similar things. It has a plugin architecture.

  • Yubico (maker of Yubikeys) have written a PAM plugin for U2F. It is packaged in Debian as libpam-u2f. The package includes documentation in /usr/share/doc/libpam-u2f/README.gz.

  • By configuring PAM to use libpam-u2f, you can require both password and the hardware token for logging into your machine.

Here are the detailed steps for Debian stretch, with minute differences from those for Ubuntu 14.04; the same commands are collected into one block after the list. If you follow these, and lock yourself out of your system, it wasn't my fault, you can't blame me, and look, squirrels! Also not my fault if you don't wear sufficient protection against dragons.

  1. Install pamu2fcfg and libpam-u2f.
  2. As your normal user, mkdir ~/.config/Yubico. The list of allowed U2F keys will be put there.
  3. Insert your U2F key and run pamu2fcfg -u$USER > ~/.config/Yubico/u2f_keys, and press the button on your U2F key when the key is blinking.
  4. Edit /etc/pam.d/common-auth and append the line auth required pam_u2f.so cue.
  5. Reboot (or at least log out and back in again).
  6. Log in, type in your password, and when prompted and the U2F key is blinking, press its button to complete the login.
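
For reference, steps 1-4 condensed into shell commands (same caveats as above; double-check everything before appending to common-auth):

sudo apt-get install pamu2fcfg libpam-u2f
mkdir -p ~/.config/Yubico
pamu2fcfg -u"$USER" > ~/.config/Yubico/u2f_keys    # press the blinking button
echo 'auth required pam_u2f.so cue' | sudo tee -a /etc/pam.d/common-auth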

pamu2fcfg reads the hardware token and writes out its identifying data in a form that the PAM module understands; see the pam-u2f documentation for details. The data can be stored in the user's home directory (my preference) or in /etc/u2f_mappings.

Once this is set up, anything that uses PAM for local authentication (console login, GUI login, sudo, desktop screen lock) will need to use the U2F key as well. ssh logins won't.

Next, add a second key to your u2f_keys. This is important, because if you lose your first key, or it's damaged, you'll otherwise have no way to log in.

  1. Insert your second U2F key and run pamu2fcfg -n > second, and press the second key's button when prompted.
  2. Edit ~/.config/Yubico/u2f_keys and append the contents of the file second to the line with your username.
  3. Verify that you can log in using your second key as well as the first key. Note that you should have only one of the keys plugged in at a time when logging in: the PAM module wants the first key it finds, so you can't test both keys plugged in at once.

This is not too difficult, but rather fiddly, and it'd be nice if someone wrote at least a way to manage the list of U2F keys in a nicer way.

Ritesh Raj Sarraf: Fully SSL for my website

15 July, 2016 - 17:24

I finally made the full switch to SSL for my website, thanks to this simple howto on Let's Encrypt. I had to use the upstream git repo, though; the Debian-packaged tool, letsencrypt.sh, did not have enough documentation/pointers in place. And finally, thanks to the Let's Encrypt project as a whole.

PS: http is now redirected to https. I hope nothing really breaks externally.


Andrew Cater: Who wrote Hello world

15 July, 2016 - 05:43
Who wrote "Hello, world"? Rereading Kernighan and Ritchie's classic book on C - https://en.wikipedia.org/wiki/The_C_Programming_Language - almost the first thing you find is the listing for hello world. The comments make it clear that this is a commonplace - the sort of program that every programmer writes as a first test: the new computer works, the compiler / interpreter produces useful output, and so on. It's the classic, canonical thing to do.
A long time back, I got asked whether programming was an art or a science: it's both, but most of all it's only good insofar as it's shared and built on. I used hello world as an example: you can write hello world. You decide to add different text - a greeting (Hej! / ni hao / Bonjour tout le monde!) for friends.
You discover at / cron / anacron - now you can schedule reminders: "It's midnight - do you know where your code is?" "Go to bed, you have school tomorrow"
You can discover how to code for a graphical environment: how to build a test framework around it to check that it _only_ prints hello world and doesn't corrupt other memory ... the uses are endless if it sparks creativity.
If you feel like it, you can share your version - and learn from others. Write it in different languages - there's the analogous 99 bottles of beer site showing how to count and use different languages at www.99-bottles-of-beer.net
Not everyone will get it: not everyone will see it, but everyone needs the opportunity.
Everyone needs the chance to share and make use of the commons, needs to be able to feel part of this,
needs to be included: needs to feel that this is part of common heritage. If you work for an employer: get them to contribute code / money / resources - even if it's as a charitable donation or to offset against taxes.
If you work for a government: get them to use Free/Libre/Open Source products.
If you work for a hosting company / ISP: get them to donate bandwidth for schools/coding clubs.
Give your time, effort, expertise to help: you gained from others, help others gain.
If you work for an IT manufacturer: get them to think of FLOSS as the norm, not the exception.

