Planet Debian


Paul Wise: DebCamp16 day 6

30 June, 2016 - 03:49

Redirect one person contacting the Debian sysadmin and web teams to Debian user support. Review wiki RecentChanges. Usual spam reporting. Check and fix a derivatives census issue. Suggest sending the titanpad maintenance issue to a wider audience. Update check-all-the-things and the copyright review tools wiki page for the licensecheck/devscripts split. Ask if debian-debug could be added to …. Discuss more about the devscripts/licensecheck split. Yesterday I grrred at Debian perl bug #588017, which causes vulnerabilities in check-all-the-things, tried to figure out the scope of the issue and worked around all of the issues I could find. (Perls are shiny and Check All The thingS can be abbreviated as cats.) Today I confirmed with the reporter (Jakub Wilk) that the patch mitigates this. Release check-all-the-things to Debian unstable (finally!!). Discuss with the borg about syncing cats to Ubuntu. Notice autoconf/automake being installed as indirect cats build-deps (via debhelper/dh-autoreconf) and poke relevant folks about this. Answer a question about alioth vs LDAP.

Gunnar Wolf: Batch of the Next Thing Co.'s C.H.I.P. computers on its way to DebConf!

30 June, 2016 - 03:28

Hello world!

I'm very happy to inform you that the Next Thing Co. has shipped us a pack of 50 C.H.I.P. computers to be given away at DebConf! What is the C.H.I.P.? As their tagline says, it's the world's first US$9 computer. Further details:

All in all, it's a nice small ARM single-board computer; I won't bore you on this mail with tons of specs; suffice to say they are probably the most open ARM system I've seen to date.

So, I agreed with Richard, our contact at the company, that I would distribute the machines among the DebConf speakers interested in one. Of course, not every DebConf speaker wants to fiddle with an adorable tiny piece of beautiful engineering, so I'm sure I'll have some spare computers to give out to other interested DebConf attendees. We are supposed to receive the C.H.I.P.s by Monday the 4th; if you want to track the package shipment, the DHL tracking number is 1209937606. Don't DDoS them too hard!

So, please do mail me telling me why you want one and what your projects for it are. My conditions for this giveaway are:

  • I will hand out the computers by Thursday the 7th.
  • Preference goes to people giving a talk. I will "line up" requests on two queues, "speaker" and "attendee", and will announce who gets one in a mail+post to this list on the said date.
  • With this in mind, I'll follow a strict "first come, first served".

To sign up for yours, please mail … - I will capture mail sent to that alias ONLY.

Olivier Grégoire: Sixth week: and the framerates appear....

29 June, 2016 - 23:57

Last week, I worked on linking the D-Bus with the client and implementing a MAP in my callback smartInfo, to push all my information in only one parameter.


This week I wanted to display my first real information in my client. I chose to begin with the frame rate, to help another person at Savoir-faire Linux who works on the encoding and decoding of the video. So I worked on the video part of the daemon, trying to understand how the daemon sends frames to LRC. All the frames are decoded in the same place, so I just pull the number of frames per second for the local and remote video and put it in my callback smartInfo.
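The per-second frame counting described above can be sketched roughly as follows. This is a hypothetical illustration in Python with invented names; the real code lives in the daemon's C++ video pipeline:

```python
import time

class FrameRateCounter:
    """Counts decoded frames and reports frames per second once per interval."""

    def __init__(self, interval=1.0, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self.frames = 0
        self.last_report = clock()
        self.fps = 0.0

    def on_frame_decoded(self):
        """Call once per decoded frame; returns the fps each time an
        interval elapses, otherwise None."""
        self.frames += 1
        now = self.clock()
        elapsed = now - self.last_report
        if elapsed >= self.interval:
            self.fps = self.frames / elapsed
            self.frames = 0
            self.last_report = now
            return self.fps  # this value would be pushed into smartInfo
        return None
```

One such counter per video stream (local and remote) would feed the callback.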


Next week, I will change my architecture a little bit. Ring can handle several calls with different information, and my implementation doesn't take care of this. I need to research and implement a new architecture to link my callback with the call.

Olivier Grégoire: Fifth week at GSoC: push information from the daemon!

29 June, 2016 - 23:57

*Last week I worked to create a window for the gnome client to display information.*

This week I worked on linking the D-Bus with the gnome client.
To do that I needed to modify the LRC:

  • Create a Qt slot to catch the signal from the D-Bus
  • Create a signal connected with a lambda function on the client

Unfortunately, I can only push a single variable at a time. So I chose to use a MAP to contain all my information. After changing this type in the daemon, D-Bus, LRC and the gnome client, everything finally works!
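The single-parameter workaround can be illustrated with a plain Python sketch (the field names are invented for illustration; the real signal travels over D-Bus between the daemon, LRC and the client):

```python
def build_smart_info(local_fps, remote_fps, resolution):
    """Pack all values into one string-to-string map, since the
    signal can only carry a single parameter."""
    return {
        "local framerate": str(local_fps),
        "remote framerate": str(remote_fps),
        "resolution": resolution,
    }

def on_smart_info(info):
    """Client-side handler: receives the whole map in one go."""
    return ", ".join(f"{k}: {v}" for k, v in sorted(info.items()))
```

New fields can then be added later without touching the signal's signature.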

Wouter Verhelst: Debcamp NBD work

29 June, 2016 - 20:07

I had planned to do some work on NBD while here at debcamp. Here's a progress report:

Task                                                                                    | Concept | Code | Tested
----------------------------------------------------------------------------------------|---------|------|-------
Change init script so it uses /etc/nbdtab rather than /etc/nbd-client for configuration |    ✓    |  ✓   |   ✗
Change postinst so it converts existing /etc/nbd-client files to /etc/nbdtab            |    ✓    |  ✗   |   ✗
Change postinst so it generates /etc/nbdtab files from debconf                          |    ✗    |  ✗   |   ✗
Create systemd unit for nbd based on /etc/nbdtab                                        |    ✓    |  ✓   |   ✓
Write STARTTLS support for client and/or server                                         |    ✓    |  ✗   |   ✗

The first four are needed to fix Debian bug #796633, of which "writing the systemd unit" was the one that seemed hardest. The good thing about debcamp, however, is that experts are aplenty (thanks Tollef), so that part's done now.

What's left:

  • Testing the init script modifications that I've made, so as to support those users who dislike systemd. They're fairly straightforward, and I don't anticipate any problems, but it helps to make sure.
  • Migrating the /etc/nbd-client configuration file to an nbdtab(5) one. This should be fairly straightforward, it's just a matter of Writing The Code(TM).
  • Changing the whole debconf setup so it writes (and/or updates) an nbdtab(5) file rather than a /etc/nbd-client shell snippet. This falls squarely into the "OMFG what the F*** was I thinking when I wrote that debconf stuff 10 years ago" area. I'll probably deal with it somehow. I hope. Not so sure how to do so yet, though.

If I manage to get all of the above to work and there's time left, I'll have a look at implementing STARTTLS support into nbd-client and nbd-server. A spec for that exists already, there's an alternative NBD implementation which has already implemented it, and preliminary patches exist for the reference implementation, so it's known to work; I just need to spend some time slapping the pieces together and making it work.

Ah well. Good old debcamp.

Michal Čihař: PHP shapefile library

29 June, 2016 - 15:00

For quite a long time, phpMyAdmin has embedded the bfShapeFiles library for importing geospatial data. Over time we had to apply fixes to it to stay compatible with newer PHP versions, but there was really no development. That's unfortunate, as it seems to be the only usable PHP library which can read and write ESRI shapefiles.

With the recent switch of phpMyAdmin to dependency handling using Composer, I wondered if we should get rid of the last embedded PHP library, which was this one - bfShapeFiles. As I couldn't find a maintained library which would work well for us, I resisted that for quite a long time, until a pull request to improve it came in. At that point I realized that it's probably better to separate it out and start to improve it outside our codebase.

That's when phpmyadmin/shapefile was started. The code is based on bfShapeFiles, applies all the fixes which were used in phpMyAdmin, and adds the improvements from the pull request. On top of that it has a brand new testsuite (the coverage is still much lower than I'd like), and while writing the tests several parsing issues were discovered and fixed. Anyway, you can now get the source from GitHub or install it using Composer from Packagist.

PS: While fixing parser bugs I've looked at other parsers as well to see how they handle some situations unclear in the specs and I had to fix Python pyshp on the way as well :-).


Reproducible builds folks: First steps towards getting containers working

29 June, 2016 - 06:19

Author: ceridwen

The 0.1 alpha release of reprotest has been accepted into Debian unstable and is available for install at … or through apt.

I've been working on redesigning reprotest so that it runs commands through autopkgtest's adt_testbed interface. For the most part, I needed to replace explicit calls to Python standard library functions for copying files and directories with calls to adt_testbed.Testbed.command() with copyup and copydown, and to use Testbed.execute() and Testbed.check_exec() to run commands instead of subprocess.

To test reprotest on the actual containers requires having containers constructed for this purpose. autopkgtest has a test that builds a minimal chroot. I considered doing something like this approach or using BusyBox. However, I have a Python script that mocks a build process, which requires having Python available in the container, and while I looked into busybox-python and MicroPython to keep the footprint small, I decided that for now this would take too much work and decided to go straight to the autopkgtest recommendations for building containers, mk-sbuild and vmdebootstrap. (I also ended up discovering a bug in debootstrap.) This means that to get the tests run requires some manual setup at the moment. In the long run, I'd like to improve that, but it's not an immediate priority. While working on adding tests for the other containers supported by autopkgtest, I also converted to py.test so that I could use fixtures and parametrization to run the Cartesian product of each variation with each container.

With tests written, I started trying to verify that my new code worked. One problem I encountered while trying to debug was that I wasn't getting full error output. In VirtSubproc.check_exec(), execute_timeout() acts something like a Popen() call:

(status, out, err) = execute_timeout(None, timeout, real_argv,
                                     stdout=stdout, stderr=subprocess.PIPE)

if status:
    bomb("%s%s failed (exit status %d)" %
         ((downp and "(down) " or ""), argv, status))
if err:
    bomb("%s unexpectedly produced stderr output `%s'" %
         (argv, err))

The problem with this is that if the call returns a non-zero exit code, which is typical for program failures, stderr doesn't get included in the error message.

I changed the first if-block to:

if status:
    bomb("%s%s failed (exit status %d)\n%s" %
         ((downp and "(down) " or ""), argv, status, err))

Another example is that autopkgtest calls schroot with the --quiet flag, which in one case was making schroot fail without any output due to a misconfiguration. I'm still trying to find and eliminate more places where errors are silenced.

autopkgtest was designed to be installed with Debian's packaging system, which handles arbitrary files and directory layouts. Unfortunately, setuptools is completely different in a way that doesn't work well with autopkgtest's design. (I'm sure this is partly because setuptools has to support all the different major OSes that run Python, including Windows.) As I discussed last week, autopkgtest has Python scripts in virt/ that are executed by subprocess calls in adt_testbed. Because these scripts import files from lib/, there needs to be an __init__.py in virt/ to make it into a package, and a sys.path hack in each script to allow it to find modules in lib/. Unfortunately, setuptools will not install this structure. First, setuptools will not install any file without a .py extension into a package. Theoretically, this is fixable: the files in virt/ are Python scripts, so I could rename them. (Theoretically, there's supposed to be some workaround involving MANIFEST.in or package_data in setup.py, but I have yet to find any documentation or explanation giving a method for installing non-Python files inside a Python package.) Second, however, setuptools does not preserve the executable bit when installing package files. The obvious workaround, changing the subprocess calls so that they invoke the scripts as python virt/<script> rather than executing virt/<script> directly, requires changing all the internal calls in the autopkgtest code, which I'm loath to do for fear of breaking it. (It's not clear to me I can easily find all of the calls, for starters.)

There are about three solutions to this that I see at the moment, all of them difficult. The first involves using either the scripts keyword or a console_scripts entry point in setup.py, as explained here. The scripts keyword is supposed to preserve the executable bit according to this StackExchange question, but I haven't verified this myself, and like everything to do with setuptools I don't trust anything anyone says about it without testing it myself. It also has the disadvantage of dumping them all into the common scripts directory. Using console_scripts involves rewriting all of them to have an executable function I can refer to in setup.py. I worry that this would be both fragile and break existing expectations in the rest of the autopkgtest code, but it might be the best solution. The third solution involves refactoring all of the autopkgtest code to import the code in the scripts rather than running it through subprocess calls. I'm reluctant to do this because I think it's almost certain to break things that will require significant work to fix.

Getting setuptools to install the autopkgtest code correctly is one blocker for the next release. Another is that autopkgtest's handling of errors during the build process involves closing the adt_testbed.Testbed so it won't take further commands. Unfortunately, this handling runs before any cleanup code I write to run outside it, which means that at the moment errors during the build will result in things like disorderfs being left mounted.

The last release blocker is that adt_testbed doesn't have any way to set a working directory when running commands. For instance, the virt/schroot script always calls schroot with --directory=/. I thought about trying to use absolute paths, but decided this was unintuitive and impractical. For the user, this would mean that instead of running something simple like make in the correct directory, they would have to run make --file=/absolute/path/to/Makefile or something similar, making all paths absolute. I worry that some build scripts wouldn't handle this correctly, either: for instance, running python from a different directory can have different effects because Python's path is initialized to contain the current directory. Changing this is going to require going deeper into the autopkgtest code than I'd hoped.

I intend to try to resolve these three issues over the next week and then prepare the next release, though how much progress I make depends on how thorny they turn out to be.

Jose M. Calhariz: at daemon 3.1.20, with 3 fixes

29 June, 2016 - 04:14

From the Debian BUG system I incorporated 3 fixes. One of them is experimental: it fixes broken code but may have side effects. Please test it.

  • New release 3.1.20:
   * Add option b to getopt, (Closes: #812972).
   * Comment a possible broken code, (Closes: #818508).
   * Add a fflush to catch more errors during writes, (Closes: #801186).

Paul Wise: DebCamp16 day 5

29 June, 2016 - 02:14

Beat head against shiny cats (no animals were harmed). Discuss the spice of sillyness. Forward a wiki bounce to the person. Mention my gobby git mail cron job. Start adopting the adequate package. Discuss cats vs licensecheck with Jonas. Usual spam reporting. Review wiki RecentChanges. Whitelisted one user in the wiki anti-spam system. Finding myself longing for a web technology. Shudder and look at the twinklies.

John Goerzen: A great day for a flight with the boys

28 June, 2016 - 10:57

I tend to save up my vacation time to use in summer for family activities, and today was one of those days.

Yesterday, Jacob and Oliver enjoyed planning what they were going to do with me. They ruled out all sorts of things nearby, but they decided they would like to fly to Ponca City, explore the oil museum there, then eat at Enrique’s before flying home.

Of course, it is not particularly hard to convince me to fly somewhere. So off we went today for some great father-son time.

The weather on the way was just gorgeous. We cruised along at about a mile above ground, which gave us pleasantly cool air through the vents and a smooth ride. Out in the distance, a few clouds were trying to form.

Whether I’m flying or driving, a pilot is always happy to pass a small airport. Here was the Winfield, KS airport (KWLD):

This is a beautiful time of year in Kansas. The freshly-cut wheat fields are still a vibrant yellow. Other crops make a bright green, and colors just pop from the sky. A camera can’t do it justice.

They enjoyed the museum, and then Oliver wanted to find something else to do before we returned to the airport for dinner. A little exploring yielded the beautiful and shady Garfield Park, complete with numerous old stone bridges.

Of course, the hit of any visit to Enrique’s is their “ice cream tacos” (sopapillas with ice cream). Here is Oliver polishing off his.

They had both requested sightseeing from the sky on our way back, but both fell asleep so we opted to pass on that this time. But you know, Oliver slept through the landing, and I had to wake him up when it was time to go. I always take it as a compliment when a 6-year-old sleeps through a landing!

Most small airports have a bowl of candy sitting out somewhere. Jacob and Oliver have become adept at finding them, and I will usually let them “talk me into” a piece of candy at one of them. Today, after we got back, they were intent on exploring the small gift shop back home, and each bought a little toy helicopter for $1.25. They may have been too tired to enjoy it though.

They’ve been in bed for awhile now, and I’m still smiling about the day. Time goes fast when you’re having fun, and all three of us were. It is fun to see them inheriting my sense of excitement at adventure, and enjoying the world around them as they go.

The lady at the museum asked how we had heard about them, and noticed I drove up in an airport car (most small airports have an old car you can borrow for a couple hours for free if you’re a pilot). I told the story briefly, and she said, “Wow. You flew out to this small town just to spend some time?” “Yep.” “Wow, that’s really neat. I don’t think we’ve ever had a visitor like you before.” Then she turned to the boys and said, “You boys are some of the luckiest kids in the world.”

And I can’t help but feel like the luckiest dad in the world.

Jonathan McDowell: Hire me!

28 June, 2016 - 05:21

It’s rare to be in a position to be able to publicly announce you’re looking for a new job, but as the opportunity is currently available to me I feel I should take advantage of it. That’s especially true given the fact I’ll be at DebConf 16 next week and hope to be able to talk to various people who might be hiring (and will, of course, be attending the job fair).

I’m coming to the end of my Masters in Legal Science and although it’s been fascinating I’ve made the decision that I want to return to the world of tech. I like building things too much it seems. There are various people I’ve already reached out to, and more that are on my list to contact, but I figure making it more widely known that I’m in the market can’t hurt with finding the right fit.

  • Availability: August 2016 onwards. I can wait for the right opportunity, but I’ve got a dissertation to write up so can’t start any sooner.
  • Location: Preferably Belfast, Northern Ireland. I know that’s a tricky one, but I’ve done my share of moving around for the moment (note I’ve no problem with having to do travel as part of my job). While I prefer an office environment, I’m perfectly able to work from home, as long as it’s as part of a team that is tooled up for dispersed workers - in my experience being the only remote person rarely works well. There’s a chance I could be persuaded to move to Dublin for the right role.
  • Type of role: I sit somewhere on the software developer/technical lead/architect spectrum. I expect to get my hands dirty (it’s the only way to learn a system properly), but equally if I’m not able to be involved in making high level technical decisions then I’ll find myself frustrated.
  • Technology preferences: Flexible. My background is backend systems programming (primarily C in the storage and networking spaces), but like most developers these days I’ve had exposure to a bunch of different things and enjoy the opportunity to learn new things.

I’m on LinkedIn and OpenHUB, which should give a bit more info on my previous experience and skill set. I know I’m light on details here, so feel free to email me to talk about what I might be able to specifically bring to your organisation.

Paul Wise: DebCamp16 day 4

28 June, 2016 - 03:25

Usual spam reporting. Review wiki RecentChanges. Rain glorious rain! Err... Update a couple of links on the debtags team page. Report Debian bug #828718 against …. Update links to debtags on DDPO and the old PTS. Report minor Debian bug #828722 against …. Update the debtags for check-all-the-things. More code and check fixes for check-all-the-things. Gravitate towards the fireplace and beat face against an annoying access point; learn of wpa_cli blacklist & wpa_cli bssid from the owner of a devilish laptop. Ask stakeholders for feedback/commits before the impending release of check-all-the-things to Debian unstable. Meet developers of the One^WGNU Ring, discuss C++ library foo. Contribute some links to an open hardware thread. Point out the location of the Debian QA SVN repository. Clear skies at night, twinkling delight.

Scarlett Clark: Debian: Reproducible builds update

28 June, 2016 - 00:58

A quick update to note that I did complete extra-cmake-modules and was given the green light to push, upstream and in Debian, and will do so ASAP. Due to circumstances out of my control, I am moving a few states over and will have to continue my efforts when I arrive at my new place of residence in a few days. Thanks for understanding.


John Goerzen: I’m switching from git-annex to Syncthing

27 June, 2016 - 20:02

I wrote recently about using git-annex for encrypted sync, but due to a number of issues with it, I’ve opted to switch to Syncthing.

I’d been using git-annex with real but noncritical data. Among the first issues I noticed was occasional but persistent high CPU usage spikes, which once started, would persist apparently forever. I had an issue where git-annex tried to replace files I’d removed from its repo with broken symlinks, but the real final straw was a number of issues with the gcrypt remote repos. git-remote-gcrypt appears to have a number of issues with possible race conditions on the remote, and at least one of them somehow caused encrypted data to appear in a packfile on a remote repo. Why there was data in a packfile there, I don’t know, since git-annex is supposed to keep the data out of packfiles.

Anyhow, git-annex is still an awesome tool with a lot of use cases, but I’m concluding that live sync to an encrypted git remote isn’t quite there yet enough for me.

So I looked for alternatives. My main criteria were supporting live sync (via inotify or similar) and not requiring the files to be stored unencrypted on a remote system (my local systems all use LUKS). I found Syncthing met these requirements.

Syncthing is pretty interesting in that, like git-annex, it doesn’t require a centralized server at all. Rather, it forms basically a mesh between your devices. Its concept is somewhat similar to the proprietary Bittorrent Sync — basically, all the nodes communicate about what files and chunks of files they have, and the changes that are made, and immediately propagate as much as possible. Unlike, say, Dropbox or Owncloud, Syncthing can actually support simultaneous downloads from multiple remotes for optimum performance when there are many changes.

Combined with syncthing-inotify or syncthing-gtk, it has immediate detection of changes and therefore very quick propagation of them.

Syncthing is particularly adept at figuring out ways for the nodes to communicate with each other. It begins by broadcasting on the local network, so known nearby nodes can be found directly. The Syncthing folks also run a discovery server (though you can use your own if you prefer) that lets nodes find each other on the Internet. Syncthing will attempt to use UPnP to configure firewalls to let it out, but if that fails, the last resort is a traffic relay server — again, a number of volunteers host these online, but you can run your own if you prefer.

Each node in Syncthing has an RSA keypair, and what amounts to part of the public key is used as a globally unique node ID. The initial link between nodes is accomplished by pasting the globally unique ID from one node into the “add node” screen on the other; the user of the first node then must accept the request, and from that point on, syncing can proceed. The data is all transmitted encrypted, of course, so interception will not cause data to be revealed.
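As a rough illustration of how a stable, globally unique ID can be derived from public key material: hash the key and render the digest in a human-pasteable encoding. This is a deliberate simplification, not Syncthing's exact scheme (real Syncthing IDs also carry check digits and grouping):

```python
import base64
import hashlib

def device_id(public_key_bytes):
    """Derive a compact identifier from public key material.
    Simplified sketch: SHA-256 the key bytes, then base32-encode the
    digest (dropping padding) so it can be read aloud or pasted."""
    digest = hashlib.sha256(public_key_bytes).digest()
    return base64.b32encode(digest).decode("ascii").rstrip("=")
```

Because the ID is a hash of key material, possessing the matching private key is what proves ownership of an ID during the TLS handshake.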

Really my only complaint about Syncthing so far is that, although it binds to localhost, the web GUI does not require authentication by default.

There is an ITP open for Syncthing in Debian, but until then, their apt repo works fine. For syncthing-gtk, the trusty version of the webupd8 PPA works in Jessie (though be sure to pin it to a low priority if you don’t want it replacing some unrelated Debian packages).

Alessio Treglia: A – not exactly United – Kingdom

27 June, 2016 - 14:54


Island of Ventotene – Roman harbour

There once was a Kingdom strongly United, built on the honours of the people of Wessex, of Mercia, Northumbria and East Anglia, who knew how to deal with the invasions of the Vikings from the east and of the Normans from the south, and came to unify the territory under an umbrella of common intents. Today, 48% of them, while keeping solid traditions, still know how to look forward to the future, joining horizons and commercial developments along with the rest of Europe. The remaining 52%, however, look back and cannot see anything in front of them except a desire for isolation, breaking the European dream born on the shores of the island of Ventotene in 1944 by Altiero Spinelli, Ernesto Rossi and Ursula Hirschmann through the “Manifesto for a free and united Europe“. An incurable fracture in the country was born in the referendum of 23 June, in which just over half of the population asked to terminate its marriage to the great European family, setting the UK back 43 years in history.

<Read More…[by Fabio Marzocca]>

Bits from Debian: DebConf16 schedule available

27 June, 2016 - 14:00

DebConf16 will be held this and next week in Cape Town, South Africa, and we're happy to announce that the schedule is already available. Of course, it is still possible for some minor changes to happen!

The DebCamp Sprints already started on 23 June 2016.

DebConf will open on Saturday, 2 July 2016 with the Open Festival, where events of interest to a wider audience are offered, ranging from topics specific to Debian to a wider appreciation of the open and maker movements (and not just IT-related). Hackers, makers, hobbyists and other interested parties are invited to share their activities with DebConf attendees and the public at the University of Cape Town, whether in form of workshops, lightning talks, install parties, art exhibition or posters. Additionally, a Job Fair will take place on Saturday, and its job wall will be available throughout DebConf.

The full schedule of the Debian Conference through the week is published. After the Open Festival, the conference will continue with more than 85 talks and BoFs (informal gatherings and discussions within Debian teams), covering not only software development and packaging but also areas like translation, documentation, artwork, testing, specialized derivatives, maintenance of the community infrastructure, and others.

There will also be a plethora of social events, such as our traditional cheese and wine party, our group photo and our day trip.

DebConf talks will be broadcast live on the Internet when possible, and videos of the talks will be published on the web along with the presentation slides.

DebConf is committed to a safe and welcoming environment for all participants. See the DebConf Code of Conduct and the Debian Code of Conduct for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf16, particularly our Platinum Sponsor Hewlett Packard Enterprise.

About Hewlett Packard Enterprise

Hewlett Packard Enterprise actively participates in open source. Thousands of developers across the company are focused on open source projects, and HPE sponsors and supports the open source community in a number of ways, including: contributing code, sponsoring foundations and projects, providing active leadership, and participating in various committees.

Paul Tagliamonte: Hello, Sense!

27 June, 2016 - 08:42

A while back, I saw a Kickstarter for one of the most well designed and pretty sleep trackers on the market. I fell in love with it, and it has stuck with me since.

A few months ago, I finally got my hands on one and started to track my data. Naturally, I now want to store this new data with the rest of the data I have on myself in my own databases.

I went in search of an API, but I found that the Sense API hasn't been published yet, and is being worked on by the team. Here's hoping it'll land soon!

After some subdomain guessing, I hit on …. So, naturally, I went to take a quick look at their Android app and its network traffic and, lo and behold, there was a pretty nicely designed API.

This API is clearly an internal API, and as such, it's something that should not be considered stable. However, I'm OK with a fragile API, so I've published a quick and dirty wrapper for the Sense API to my GitHub.

I've published it because I've found it useful, but I can't promise the world, (since I'm not a member of the Sense team at Hello!), so here are a few ground rules of this wrapper:

  • I make no claims to the stability or completeness.
  • I have no documentation or assurances.
  • I will not provide the client secret and ID. You'll have to find them on your own.
  • This may stop working without any notice, and there may even be really nasty bugs that result in your alarm going off at 4 AM.
  • Send PRs! This is a side-project for me.

This module is currently Python 3 only. If someone really needs Python 2 support, I'm open to minimally invasive patches to the codebase using six to support Python 2.7.

Working with the API:

First, let's go ahead and log in using python -m sense.

$ python -m sense
Sense OAuth Client ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Sense OAuth Client Secret: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Sense email:
Sense password: 
Attempting to log into Sense's API
Attempting to query the Sense API
The humidity is **just right**.
The air quality is **just right**.
The light level is **just right**.
It's **pretty hot** in here.
The noise level is **just right**.

Now, let's see if we can pull up information on my Sense:

>>> from sense import Sense
>>> sense = Sense()
>>> sense.devices()
{'senses': [{'id': 'xxxxxxxxxxxxxxxx', 'firmware_version': '11a1', 'last_updated': 1466991060000, 'state': 'NORMAL', 'wifi_info': {'rssi': 0, 'ssid': 'Pretty Fly for a WiFi (2.4 GhZ)', 'condition': 'GOOD', 'last_updated': 1462927722000}, 'color': 'BLACK'}], 'pills': [{'id': 'xxxxxxxxxxxxxxxx', 'firmware_version': '2', 'last_updated': 1466990339000, 'battery_level': 87, 'color': 'BLUE', 'state': 'NORMAL'}]}

Neat! Pretty cool. Look, you can even see my WiFi AP! Let's try some more and pull some trends out.

>>> values = [x.get("value") for x in sense.room_sensors()["humidity"]][:10]
>>> min(values)
>>> max(values)

I plan to keep maintaining it as long as it's needed, so I welcome co-maintainers, and I'd love to see what people build with it! So far, I'm using it to dump my room data into InfluxDB, pulling information on my room into Grafana. Hopefully more to come!

Happy hacking!

Steinar H. Gunderson: Nageru 1.3.0 released

27 June, 2016 - 05:00

I've just released version 1.3.0 of Nageru, my live software video mixer.

Things have been a bit quiet on the Nageru front recently, for two reasons: First, I've been busy with moving (from Switzerland to Norway) and associated job change (from Google to MySQL/Oracle). Things are going well, but these kinds of changes tend to take, well, time and energy.

Second, the highlight of Nageru 1.3.0 is encoding of H.264 streams meant for end users (using x264), not just the Quick Sync Video streams from earlier versions, which work more as a near-lossless intermediate format meant for transcoding to something else later. Like with most things video, hitting such features really hard (I've been doing literally weeks of continuous stream testing) tends to expose weaknesses in upstream software.

In particular, I wanted x264 speed control, where the quality is tuned up and down live as the content dictates. This is mainly because the content I want to stream this summer (demoscene competitions) varies from the very simple to downright ridiculously complex (as you can see, YouTube just basically gives up and creates gray blocks). If you have only one static quality setting, you have the choice between something that looks like crap for everything, and something that drops frames like crazy (or, if your encoding software isn't all that, like e.g. using ffmpeg(1) directly, just gets behind and all your clients' streams simply stop) when the tricky stuff comes. There was an unofficial patch for speed control, but it was buggy, not suitable for today's hardware, and not kept at all up to date with modern x264 versions. So to get speed control, I had to rework that patch pretty heavily (including making it work in Nageru directly instead of requiring a patched x264)… and then it exposed a bug in x264 proper that would cause corruption when changing between some presets, and I couldn't release 1.3.0 before that fix had at least hit git.

Similarly, debugging this exposed an issue with how I did streaming with ffmpeg and the MP4 mux (which you need to be able to stream H.264 directly to HTML5 <video> without any funny and latency-inducing segmenting business); to know where keyframes started, I needed to flush the mux before each one, but this messes up interleaving, and if frames were ever dropped right in front of a keyframe (which they would be on the most difficult content, even at speed control's fastest presets!), the “duration” field of the frame would be wrong, causing the timestamps to be wrong and even having pts < dts in some cases. (VLC has to deal with flushing in exactly the same way, and thus would have exactly the same issue, although VLC generally doesn't transcode variable-framerate content so well to begin with, so the heuristics would be more likely to work. Incidentally, I wrote the VLC code for this flushing back in the day, to be able to stream WebM for some DebConf.) I cannot take credit for the ffmpeg/libav fixes (that was all done by Martin Storsjö), but again, Nageru had to wait for the new API they introduce (which simply signals to the application when a keyframe is about to begin, removing the need for flushing) to get into git mainline. Hopefully, both fixes will get into releases soon-ish and from there make their way into stretch.

Apart from that, there's a bunch of fixes as always. I'm still occasionally (about once every two weeks of streaming or so) hitting what I believe is a bug in NVIDIA's proprietary OpenGL drivers, but it's nearly impossible to debug without some serious help from them, and they haven't been responding to my inquiries. Every two weeks means that you could be hitting it in a weekend's worth of streaming, so it would be nice to get it fixed, but it also means it's really really hard to make a reproducible test case. :-) But the fact that this is currently the worst stability bug (and that you can work around it by using e.g. Intel's drivers) also shows that Nageru is pretty stable these days.

Iustin Pop: Random things of the week - brexit and the pretzel

27 June, 2016 - 02:59
Random things of the week

In no particular order (mostly).

Coming back from the US, it was easier dealing with the jet-lag this time; doing sports in the morning or at noon and eating light on the evening helps a lot.

The big thing of the week, that has everybody talking, is of course brexit. My thoughts, as written before in a facebook comment: direct democracy doesn't really work if it's done once in a blue moon. Wikipedia says there have been thirteen referendums in the UK since 1975, but most of them (10) were on devolution issues in individual countries, and only three were UK-wide referendums (quoting from the above page): the first on membership of the European Economic Community in 1975, the second on adopting the Alternative Vote system in parliamentary elections in 2011, and the third the current one. Which means that a UK-wide referendum happens every 13 years or so.

At this frequency, people are a) not used to informing themselves on the actual issues, b) not believing that their vote will actually change things, and most likely c) not taking the "direct-democracy" aspect seriously (thinking beyond the issue at hand and how it will play together with all the rest of the political decisions). The result is what we've seen: leave politicians already backpedalling on issues, and confusion that yes, leave votes actually counted.

My prognosis for what's going to happen:

  • one option, this gets magically undone, and there will be rejoicing at the barely avoided big damage (small damage already done).
  • otherwise, the UK will lose significantly from the economic point of view, enough that they'll try being out of the EU officially but "in" the EU from the point of view of trade.
  • in any case, large external companies will be very wary of investing in production in the UK (e.g. Japanese car manufacturers), and some will leave.
  • most of the 52% who voted leave will realise, in around 5 years, that this was a bad outcome.
  • hopefully, politicians (both in the EU and in the UK) will try to pay more attention to inequality (here I'm optimistic).

We'll see what happens though. Reading comments on various websites still makes me cringe at how small some people think: "independence" from the EU when the real issue is the EU versus the other big blocks—US, China, in the future India; and "versus" not necessarily in a conflict sense, but simply in terms of negotiating power, economic treaties, etc.

Back to more down-to-earth things: this week was quite a good week for me. Including commutes, my calendar turned out quite nice:

[image: my weekly training calendar]

The downside was that most of those were short runs or bike sessions. My runs are now usually 6.5K, and I'll try to keep to that for a few weeks, in order to be sure that bone and ligaments have adjusted, and hopefully keep injuries away.

On the bike front, the only significant thing was that I also did the Zwift Canyon Ultimate Pretzel Mission, on the last day of the contest (today): 73.5Km in total, 3h:27m. I've done 60K rides on Zwift before, so the first 60K were OK, but the last ~5K were really hard. My legs felt like logs of wood, and I was only pushing very weak output by the end, although I did hydrate and fuel up during the ride. But I was proud of the fact that on the last sprint (about 2K before the end of the ride), I had ~34s, compared to my all-time best of 29.2s. Not bad after ~3h20m of riding and 1300 virtual meters of ascent. Strava also tells me I got 31 PRs on various segments, but that's because I rode on some parts of Watopia that I had never ridden before (mostly the reverse ones).

Overall, stats for this week: ~160Km in total (virtual and real, biking and running), ~9 hours spent doing sports. Still much lower than the amount of time I was playing computer games, so it's a net win ☺

Have a nice start of the week everyone, and keep doing what moves you forward!

Paul Wise: DebCamp16 day 3

27 June, 2016 - 02:31

Review, approve chromium, gnome-terminal and radeontop screenshots. Disgusted to see the level of creativity GPL violators have. Words of encouragement on #debian-mentors. Pleased to see Tails reproducible builds funding by Mozilla. Point out build dates in versions leads to non-reproducible builds. Point out apt-file search to someone looking for a binary of kill. Review wiki RecentChanges. Alarmingly windy. Report important Debian bug #828215 against unattended-upgrades. Clean up some code in check-all-the-things and work on fixing Debian bug #826089. Wind glorious wind! Much clearer day, nice view of the mountain. More check-all-the-things code clean up and finish up fixing Debian bug #826089. Twinkling city lights and more wind. Final code polish during dinner/discussion. Wandering in the wind amongst the twinklies. Whitelisted one user in the wiki anti-spam system. Usual spam reporting.


Creative Commons License — The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.