A fair number of things have happened since I last blogged about something other than music. First of all, we did actually hold a Debian Diversity meeting. It was quite nice, with fewer people around than hoped for, and I attribute that to some extent to the trolls and haters who defaced the titanpad page for the agenda and destroyed the doodle entry for settling on a date for the meeting. They even tried to troll my blog with comments, and while I have approved controversial responses in the past, those went over the line of being acceptable and didn't carry any relevant content.
One response that I didn't approve but kept in my mailbox is even giving me strength to carry on. There is one sentence in it that speaks to me: Think you can stop us? You can't you stupid b*tch. You have ruined the Debian community for us. The rest of the message is of no further relevance, but even though I can't take credit for being responsible for that, I'm glad to be a perceived part of ruining the Debian community for intolerant and hateful people.
A lot of other things have happened since too. Mostly locally here in Vienna, several queer empowering groups formed around me; some of them existed already, some formed with my help. We now have several great regular meetings for non-binary people, for queer polyamorous people (about which we gave an interview), a queer playfight (I might explain that concept another time), a polyamory discussion group, two bi-/pansexual groups, a queer-feminist choir, and there will be a European Lesbian* Conference in October where I help with the organization …
… and on June 21st I'll finally receive the keys to my flat in Que[e]rbau Seestadt. I'm sooo looking forward to it. It will be part of the Let me come Home experience that I'm currently in. Another part of that experience is that I started changing my name (and gender marker) officially. I had my first appointment at the corresponding bureau, and I hope that it won't take too long, because I have to get my papers in time for booking my flight to Montreal, and at some point during the process my current passport won't contain correct data anymore. So for the people who have it in their signing policy to see government IDs, this might be your chance to finally sign my key.
I plan to do a diversity BoF at DebConf where we can speak more directly about where we want to head with the project. I hope I'll find the time to do an IRC meeting beforehand. I'm just uncertain how to coordinate that one to make it accessible for interested parties while keeping the destructive trolls out. I'm open to ideas here.
Following up on a previous post announcing the availability of a first round of AWS AMIs for stretch, I'm happy to announce the availability of a second round of images. These images address all the feedback we've received about the first round. The notable changes include:
- Don't install a local MTA.
- Don't install busybox.
- Ensure that /etc/machine-id is recreated at launch.
- Fix the security.debian.org sources.list entry.
- Enable Enhanced Networking and ENA support.
- Images are owned by the official debian.org AWS account, rather than my personal account.
AMI details are listed on the wiki. As usual, you're encouraged to submit feedback to the cloud team via the cloud.debian.org BTS pseudopackage, the debian-cloud mailing list, or #debian-cloud on IRC.
Time for a new release of Rblpapi -- version 0.3.6 is now on CRAN. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg Labs (but note that a valid Bloomberg license and installation is required).
This is the seventh release since the package first appeared on CRAN last year. This release brings a very nice new function lookupSecurity() contributed by Kevin Jin as well as a number of small fixes and enhancements. Details below:

Changes in Rblpapi version 0.3.6 (2017-04-20)
bdp documentation has another override example
Added file init.c with calls to R_registerRoutines() and R_useDynamicSymbols(); also use .registration=TRUE in useDynLib in NAMESPACE (Dirk in #220)
getBars and getTicks can now return data.table objects (Dirk in #221)
bds has improved internal protect logic via Rcpp::Shield (Dirk in #222)
Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc. should go to the issue ticket system at the GitHub repo.
New package! And, as it happens, it is effectively a subset or variant of one of my oldest packages, RQuantLib.
Fairly recently, Peter Caspers started to put together a header-only subset of QuantLib. He called this Quantuccia, and, upon me asking, said that it stands for "little sister" of QuantLib. Very nice.
One design goal is to keep Quantuccia header-only. This makes distribution and deployment much easier. In the fifteen years that we have worked with QuantLib by providing the R bindings via RQuantLib, it has always been a concern to provide current QuantLib libraries on all required operating systems. Many people helped over the years, but it is still an issue, and e.g. right now we have no Windows package as there is no library to build it against.
So what can it do right now? We started with calendaring, and you can compute dates pertaining to different (ISDA and other) business day conventions, and compute holiday schedules. Here is one example computing, inter alia, under the NYSE holiday schedule common for US equity and futures markets:
R> library(RcppQuantuccia)
R> fromD <- as.Date("2017-01-01")
R> toD <- as.Date("2017-12-31")
R> getHolidays(fromD, toD)        # default calendar, i.e. TARGET
"2017-04-14" "2017-04-17" "2017-05-01" "2017-12-25" "2017-12-26"
R> setCalendar("UnitedStates")
R> getHolidays(fromD, toD)        # US aka US::Settlement
"2017-01-02" "2017-01-16" "2017-02-20" "2017-05-29" "2017-07-04" "2017-09-04"
"2017-10-09" "2017-11-10" "2017-11-23" "2017-12-25"
R> setCalendar("UnitedStates::NYSE")
R> getHolidays(fromD, toD)        # US New York Stock Exchange
"2017-01-02" "2017-01-16" "2017-02-20" "2017-04-14" "2017-05-29" "2017-07-04"
"2017-09-04" "2017-11-23" "2017-12-25"
R>
The GitHub repo already has a few more calendars, and more are expected. Help is of course welcome for both this, and for porting over actual quantitative finance calculations.
Preparations for the release of TeX Live 2017 started a few days ago with the freeze of updates in TeX Live 2016 and the announcement of the official start of the pretest period. That means that we invite people to test the new release and help fix bugs.
Notable changes are listed on the pretest page; here I only want to report on the changes in the core infrastructure: changes in the user/sys mode of fmtutil and updmap, and the introduction of the tlmgr shell.

User/sys mode of fmtutil and updmap
We (both at TeX Live and Debian) regularly got error reports about fonts not being found, formats not being updated, etc. The reason for all of this was unmistakably the same: the user had called updmap or fmtutil without the -sys option, thus creating a copy of the configuration files under their home directory, shadowing all later updates on the system side.
The reason for this behavior is the widespread misinformation (outdated information) on the internet suggesting to call updmap.
To counteract this, we have changed the behavior so that both updmap and fmtutil now accept a new argument -user (in addition to the already present -sys), and refuse to run when called without either of them, printing a warning and linking to an explanation page. This page provides more detailed documentation and best-practice examples.

tlmgr shell
The TeX Live Manager got a new `shell' mode, invoked by tlmgr shell. Details still need to be fleshed out, but in principle it is possible to use get and set to query and set some of the options normally passed via the command line, and to use all the actions as defined in the documentation. The advantage of this is that it is not necessary to load the tlpdb for each invocation. Here is a short example:
[~] tlmgr shell
protocol 1
tlmgr> load local
OK
tlmgr> load remote
tlmgr: package repository /home/norbert/public_html/tlpretest (verified)
OK
tlmgr> update --list
tlmgr: saving backups to /home/norbert/tl/2017/tlpkg/backups
update: bchart [147k]: local: 27496, source: 43928
...
update: xindy [535k]: local: 43873, source: 43934
OK
tlmgr> update --all
tlmgr: saving backups to /home/norbert/tl/2017/tlpkg/backups
[ 1/22, ??:??/??:??] update: bchart [147k] (27496 -> 43928) ... done
[ 2/22, 00:00/00:00] update: biber [1041k] (43873 -> 43910) ... done
...
[22/22, 00:50/00:50] update: xindy [535k] (43873 -> 43934) ... done
running mktexlsr ...
done running mktexlsr.
...
OK
tlmgr> quit
tlmgr: package log updated: /home/norbert/tl/2017/texmf-var/web2c/tlmgr.log
[~]
Please test and report bugs to our mailing list.
Here's what happened in the Reproducible Builds effort between Sunday April 9 and Saturday April 15 2017:

Upcoming events
On April 26th Chris Lamb will give a talk at foss-north 2017 in Gothenburg, Sweden on Reproducible Builds.

Media coverage
Jake Edge wrote a summary of Vagrant Cascadian's talk on Reproducible Builds at LibrePlanet.

Toolchain development and fixes
Ximin Luo forwarded patches to GCC for BUILD_PATH_PREFIX_MAP support.
With this patch backported to GCC-6, as well as a patched dpkg to set the environment variable, he scheduled ~3,300 packages that are unreproducible in unstable-amd64 but reproducible in testing-amd64 - because we vary the build path in the former but not the latter case. Our infrastructure ran these in just under 3 days, and we reproduced ~1,700 extra packages.
This is about 6.5% of ~26,100 Debian source packages, and about 1/2 of the ones whose irreproducibility is due to build-path issues. Most of the rest are not related to GCC, such as things built by R, OCaml, Erlang, LLVM, PDF IDs, etc.
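The idea behind BUILD_PATH_PREFIX_MAP can be sketched as a simple prefix rewrite: paths under the (randomly named) build directory are mapped to a fixed replacement before being embedded in debug info, so two builds from different directories embed identical paths. The sketch below illustrates the concept only; it is not the actual GCC patch, and the real environment variable has its own encoding rules.

```python
def apply_prefix_map(path, prefix_map):
    """Rewrite a leading build-path prefix to a fixed replacement,
    so that paths embedded in build output no longer depend on
    where the package happened to be built."""
    for src, dst in prefix_map.items():
        if path.startswith(src):
            return dst + path[len(src):]
    return path

# Two builds in different scratch directories embed the same path:
p1 = apply_prefix_map("/build/foo-1.0-abc123/src/main.c",
                      {"/build/foo-1.0-abc123": "/usr/src/foo-1.0"})
p2 = apply_prefix_map("/build/foo-1.0-xyz789/src/main.c",
                      {"/build/foo-1.0-xyz789": "/usr/src/foo-1.0"})
# p1 == p2 == "/usr/src/foo-1.0/src/main.c"
```

Varying the build path between unstable and testing, as described above, is exactly the kind of difference such a map cancels out.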
(The dip afterwards, in the graph linked above, is due to reverting back to an unpatched GCC-6, but we'll be rebasing the patch continually over the next few weeks so the graph should stay up.)

Packages reviewed and fixed, and bugs filed
- #860200 filed against poti, forwarded upstream.
- #860201 filed against sunpinyin, forwarded upstream.
- #860203 filed against avifile.
- #860211 filed against qtractor.
- #860212 filed against samplv1.
- #860213 filed against drumkv1.
- #860214 filed against synthv1.
- #860218 filed against templayer.
- #860266 filed against miniupnpd, forwarded upstream.
- #860275 filed against msp430mcu.
- #860277 filed against g2clib.
- #860278 filed against openigtlink.
- #860279 filed against xmlrpc-c.
- #860372 filed against hp-search-mac.
- #860373 filed against foxeye.
- #860374 filed against python-taskflow.
- #860384 filed against polygen.
38 package reviews have been added, 111 have been updated and 85 have been removed this week, adding to our knowledge about identified issues.
6 issue types have been updated:
- randomness_in_gcj_output: gcj is deprecated/dead
- records_build_flags, captures_build_path: we temporarily consider these non-deterministic, to better track the issue - the patches are still pending and statuses will keep changing as we upload patched packages.
- locale_in_documentation_generated_by_javadoc: seems to be fixed for every non-FTBFS package that it was affected by.
Development continued in git on the experimental branch:
- Don't crash on invalid archives (#833697)
- Tidy up some other code
During our reproducibility testing, FTBFS bugs have been detected and reported by:
- Chris Lamb (3)
- Chris West (1)
This week's edition was written by Ximin Luo, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.
This year I decided to participate in Midburn, the Israeli version of Burning Man. While thinking of doing something different from my usual habit, I found myself volunteering in the Midburn IT department and getting the task of making it an open source project. Back into my comfort zone, while trying to escape it.
I found a community of volunteers from the Israeli high tech scene who work together for building the infrastructure for Midburn. In many ways, it’s already an open source community by the way it works. One critical and formal fact was lacking, and that’s the license for the code. After some discussion we decided on using Apache License 2.0 and I started the process of changing the license, taking it seriously, making sure it goes “by the rules”.
Our code is available on GitHub at https://github.com/Midburn/. And while it still needs tidying, I prefer the release-early-and-often approach. The main idea we want to bring to the Burn infrastructure is using Spark as a database, and we have already begun talking with parallel teams of other burn events. I'll follow up on our technological agenda / vision. In the meanwhile, you are more than welcome to comment on the code or join one of the teams (e.g. the volunteers module to organize who does which shift during the event).
Filed under: Israeli Community
I often need to convert signals from HDMI to SDI (and occasionally back). This requires a box of some sort, and eBay obliges; there's a bunch of different sellers of the same devices, selling them for around $20–25. They don't seem to have a brand name, but they are invariably sold as 3G-SDI converters (meaning they should go up to 1080p60) and look like this:
There are also corresponding SDI-to-HDMI converters that look pretty much the same except they convert the other way. (They're easy to confuse, but that's not a problem unique to them.)
I've used them for a while now, and there are pros and cons. They seem reliable enough, and they're 1/4th the price of e.g. Blackmagic's Micro converters, which is a real bargain. However, there are also some issues:
- For 3G-SDI, they output level A only, with no option for level B. (In fact, there are no options at all.) Level A is the most sane standard, and also what most equipment uses, but there exists certain older equipment that only works with level B.
- They don't have reclocking chips, so their timing accuracy is not superb. I managed to borrow a Phabrix SDI analyzer and measured the jitter; with a very short cable, I got approximately 0.85–0.95 UI (varying a bit), whereas a Blackmagic converter gave me 0.23–0.24 UI (much more stable). This may be a problem at very long cable lengths, although I haven't tried 100m runs and such.
- When converting to and from RGB, they seem to assume Rec. 601 Y'CbCr coefficients even for HD resolutions. This means the colors will be a bit off in some cases, although for most people, it will be hard to notice without looking at it side-by-side. (I believe the HDMI-to-SDI and SDI-to-HDMI converters make the same mistake, so that the errors cancel out if you just want to use a pair as HDMI extenders. Also, if your signal is Y'CbCr already, you don't need to care.)
- They don't insert SMPTE 352M payload ID. (Supposedly, this is because the SDI chip they use, called GV7600, is slightly out-of-standard on purpose in order to avoid paying expensive licensing fees to SMPTE.) Normally, you wouldn't need to care, but 3G-SDI actually requires this, and worse; Blackmagic's equipment (at least the Duo 2, and I've seen reports about the ATEMs as well) enforces that. If you try to run e.g. 1080p50 through them and into a Duo 2, it will be misdetected as “1080p25, no signal”. There's no workaround that I know of.
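The Rec. 601 issue above comes down to the luma/chroma matrix coefficients: SD (Rec. 601) and HD (Rec. 709) define different weights for R, G and B, so applying the 601 matrix to a 709 signal skews the decoded colors slightly. A minimal sketch of the difference, using only the standard published coefficients (nothing specific to these converters):

```python
def luma(r, g, b, kr, kb):
    """Y' = Kr*R' + Kg*G' + Kb*B', with Kg = 1 - Kr - Kb."""
    return kr * r + (1.0 - kr - kb) * g + kb * b

# Standard (Kr, Kb) coefficient pairs:
REC601 = (0.299, 0.114)    # SD
REC709 = (0.2126, 0.0722)  # HD

# For a saturated green the two standards disagree noticeably:
y601 = luma(0.0, 1.0, 0.0, *REC601)  # 0.587
y709 = luma(0.0, 1.0, 0.0, *REC709)  # 0.7152
```

A converter that assumes the 601 weights for an HD signal therefore reconstructs RGB with a small but systematic error, which matches the "a bit off, hard to notice without a side-by-side" observation above.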
The last issue is by far the worst, but it only affects 3G-SDI resolutions. 720p60, 1080p30 and 1080i60 all work fine. And to be fair, not even Blackmagic's own converters actually send 352M correctly most of the time…
I wish there were a way I could publish this somewhere people would actually read it before buying these things, but without a name, it's hard for people to find it. They're great value for money, and I wouldn't hesitate to recommend them for almost all use… but then, there's that almost. :-)
~ make-theme-image gnome-icon-theme moblin-icon-theme
This is quite useful to get a good idea of the icons available in a package. You can select the icons you want to display using the -w option. The following command should provide you with a decent overview of the icon themes present in Debian:
apt search -- -icon-theme | grep / | cut -d/ -f1 | xargs make-theme-image
I hope you find it useful! In any case, it's on GitHub, so feel free to patch and share.
I've heard about 3d-printing a lot in the past, although the hype seems to have mostly died down. My view has always been "That seems cool", coupled with "Everybody says making the models is very hard", and "the process itself is fiddly & time-consuming".
I've been sporadically working on a project for a few months now which displays tram-departure times; this is part of my drive to "hardware" things with Arduino/ESP8266 devices. Most visitors to our flat have commented on it, at least once, and over time it has become gradually more and more user-friendly. Initially it was just a toy-project for myself, so everything was hard-coded in the source, but over time that changed - which I mentioned here (specifically the Access-point setup):
- When it boots up, unconfigured, it starts as an access-point.
- So you can connect and configure the WiFi network it should join.
- Once it's up and running you can point a web-browser at it.
- This lets you toggle the backlight, change the timezone, and the tram-stop.
- These values are persisted to flash so reboots will remember everything.
I've now wired up an input-button to the device too, experimenting with the different ways that a single button can carry out multiple actions:
- Press & release - toggle the backlight.
- Press & release twice - a double-click if you like - show a message.
- Press, hold for 1 second, then release - re-sync the date/time & tram-data.
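Dispatching several actions from one button boils down to grouping presses within a double-click window and checking the press duration. A sketch of that classification logic (the 1-second hold threshold matches the post; the double-click window is an assumed value, and this is not the actual firmware code):

```python
LONG_PRESS_MS = 1000           # hold threshold, as described above
DOUBLE_CLICK_WINDOW_MS = 400   # assumed value, not from the post

def classify(press_durations_ms):
    """Given the durations of the button presses that occurred within
    one double-click window, decide which action to run."""
    if any(d >= LONG_PRESS_MS for d in press_durations_ms):
        return "resync"             # press, hold 1s, release
    if len(press_durations_ms) >= 2:
        return "show-message"       # double click
    return "toggle-backlight"       # single press & release

print(classify([120]))       # toggle-backlight
print(classify([100, 90]))   # show-message
print(classify([1500]))      # resync
```

The only subtlety in real firmware is that a single click can't be acted on immediately: you have to wait out the double-click window before deciding it really was a single press.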
Anyway, the software is neat, and I can't think of anything obvious to change. So let's move on to the real topic of this post: 3D printing.
I randomly remembered that I'd heard about an online site holding 3D-models, and on a whim I searched for "4x20 LCD". That led me to this design, which is exactly what I was looking for. Just like open-source software we're now living in a world where you can get open-source hardware! How cool is that?
I had to trust the dimensions of the model, and obviously I was going to mount my new button into the box, rather than the knob shown. But having a model was great. I could download it, for free, and I could view it online at viewstl.com.
But with a model obtained the next step was getting it printed. I found a bunch of commercial companies, here in Europe, who would print a model and ship it to me, but when I uploaded the model they priced it at €90+. Too much. I'd almost lost interest when I stumbled across a site which provides a gateway to a series of individuals/companies who will print things for you, on demand: 3dhubs.
Once again I uploaded my model, and this time I was able to select a guy in the same city as me. He printed my model for 1/3-1/4 of the price of the companies I'd found, and sent me fun pictures of the object while it was in the process of being printed.
To recap I started like this:
Then I boxed it in cardboard which looked better than nothing, but still not terribly great:
Now I've found an online case design for free, got it printed cheaply by a volunteer (feels like the wrong word, after all I did pay him), and I have something which looks significantly more professional:
Inside it looks as neat as you would expect:
Of course the case still cost 5 times as much as the actual hardware involved (button: €0.05, processor-board €2.00 and LCD I2C display €3.00). But I've gone from being somebody who had zero experience with hardware-based projects 4 months ago, to somebody who has built a project which is functional and "pretty".
The internet really is a glorious thing. Using it for learning, and coding is good, using it for building actual physical parts too? That's something I never could have predicted a few years ago and I can see myself doing it more in the future.
Sure the case is a little rough around the edges, but I suspect it is now only a matter of time until I learn how to design my own models. An obvious extension is to add a status-LED above the switch, for example. How hard can it be to add a new hole to a model? (Hell I could just drill it!)
A nice little game, Firewatch, puts you into a fire watch tower in Wyoming, with only a walkie-talkie connecting you to your supervisor Delilah. A so-called "first person mystery adventure" with very nice graphics and great atmosphere.
Starting with your trip to the watch tower, the game sends the player on a series of "missions", during which more and more clues about a mysterious disappearance are revealed. The game's progression is rather straightforward; one has hardly any choices, and it is practically impossible to miss something or fail in some way.
The big plus of the game is the great atmosphere, the funny dialogues with Delilah, the story that pulls you into the game, and the development of the character(s). The tower, the cave, all the places one visits are delicately designed with lots of personality, making this a very human game.
What is weak is the finish. During the game I was always thinking about whether I should tell Delilah everything, or keep some things secret. But in the end nothing matters; it all ends with a simple escape in the helicopter and without tying up any of the loose ends. Somewhat a pity for such a beautiful game to leave the player unsatisfied at the end.
But although the finish wasn't that good, I still enjoyed it more than I expected. Due to the simple flow it won't keep you busy for many hours, but as a short diversion over a few evenings (for me), it was a nice break from all the riddle games I love so much.
The DebConf team would like to call for proposals for the DebConf17 Open Day, a whole day dedicated to sessions about Debian and Free Software, and aimed at the general public. Open Day will precede DebConf17 and will be held in Montreal, Canada, on August 5th 2017.
DebConf Open Day will be a great opportunity for users, developers and people simply curious about our work to meet and learn about the Debian Project, Free Software in general and related topics.

Submit your proposal
We welcome submissions of workshops, presentations or any other activity which involves Debian and Free Software. Activities in both English and French are accepted.
Here are some ideas about content we'd love to offer during Open Day. This list is not exhaustive, feel free to propose other ideas!
- An introduction to various aspects of the Debian Project
- Talks about Debian and Free Software in art, education and/or research
- A primer on contributing to Free Software projects
- Free software & Privacy/Surveillance
- An introduction to programming and/or hardware tinkering
- A workshop about your favorite piece of Free Software
- A presentation about your favorite Free Software-related project (user group, advocacy group, etc.)
To submit your proposal, please fill in the form at https://debconf17.debconf.org/talks/new/

Volunteer
We need volunteers to help ensure Open Day is a success! We are specifically looking for people familiar with the Debian installer to attend the Debian installfest, as resources for people seeking help to install Debian on their devices. If you're interested, please add your name to our wiki: https://wiki.debconf.org/wiki/DebConf17/OpenDay#Installfest

Attend
Participation in Open Day is free and no registration is required.
The schedule for Open Day will be announced in June 2017.
On my quest to generate reproducible standalone binaries for GNU FreeDink, I met new friends but currently lie defeated by an unexpected enemy...
- compiler version needs to be identical and recorded
- build options and their order need to be identical and recorded
- build path needs to be identical and recorded (otherwise debug symbols - and BuildIDs - change)
- diffoscope helps checking for differences in build output
- use -Wl,--no-insert-timestamp for .exe (with old binutils 2.25 caveat)
- no need to set a build path for stripped .exe (no ELF BuildID)
- reprotest helps checking build variations automatically
- MXE stack is apparently deterministic enough for a reproducible static build
- umask needs to be identical and recorded
- file timestamps need to be set and recorded (more on this in a future episode)
First, the random build differences when using -Wl,--no-insert-timestamp were explained.
peanalysis shows random build dates:
$ reprotest 'i686-w64-mingw32.static-gcc hello.c -I /opt/mxe/usr/i686-w64-mingw32.static/include -I/opt/mxe/usr/i686-w64-mingw32.static/include/SDL2 -L/opt/mxe/usr/i686-w64-mingw32.static/lib -lmingw32 -Dmain=SDL_main -lSDL2main -lSDL2 -lSDL2main -Wl,--no-insert-timestamp -luser32 -lgdi32 -lwinmm -limm32 -lole32 -loleaut32 -lshell32 -lversion -o hello && chmod 700 hello && analysePE.py hello | tee /tmp/hello.log-$(date +%s); sleep 1' 'hello'
$ diff -au /tmp/hello.log-1*
--- /tmp/hello.log-1490950327  2017-03-31 10:52:07.788616930 +0200
+++ /tmp/hello.log-1523203509  2017-03-31 10:52:09.064633539 +0200
@@ -18,7 +18,7 @@
 found PE header (size: 20)
 machine: i386
 number of sections: 17
- timedatestamp: -1198218512 (Tue Jan 12 05:31:28 1932)
+ timedatestamp: 632430928 (Tue Jan 16 09:15:28 1990)
 pointer to symbol table: 4593152 (0x461600)
 number of symbols: 11581 (0x2d3d)
 size of optional header: 224
@@ -47,7 +47,7 @@
 Win32VersionValue: 0
 size of image (memory): 4640768
 size of headers (offset to first section raw data): 1536
- checksum (for drivers): 4927867
+ checksum (for drivers): 4922616
 subsystem: 3 win32 console binary
 DllCharacteristics: 0
These patches fix the variation and were submitted to MXE (pull request).
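The timedatestamp that peanalysis prints lives at a fixed place in the COFF header: a 32-bit little-endian value 8 bytes past the "PE\0\0" signature, whose own offset is stored at 0x3C. A few lines of Python can read it (a sketch in the spirit of the analysePE.py used above, not its actual code):

```python
import struct

def pe_timestamp(data: bytes) -> int:
    """Return the COFF TimeDateStamp from a PE image given as bytes."""
    e_lfanew, = struct.unpack_from("<I", data, 0x3C)    # offset of "PE\0\0"
    assert data[e_lfanew:e_lfanew + 4] == b"PE\0\0"
    # COFF header: machine (2 bytes), section count (2), then the stamp (4)
    ts, = struct.unpack_from("<I", data, e_lfanew + 8)
    return ts

# Demo on a synthetic minimal header:
hdr = bytearray(0x60)
struct.pack_into("<I", hdr, 0x3C, 0x40)                     # e_lfanew
hdr[0x40:0x44] = b"PE\0\0"
struct.pack_into("<HHI", hdr, 0x44, 0x14C, 17, 1490950327)  # i386, 17 sections, stamp
print(pe_timestamp(bytes(hdr)))  # 1490950327
```

With -Wl,--no-insert-timestamp fixed upstream, this field is exactly what should stop varying between rebuilds.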
Next was playing with compiler support for SOURCE_DATE_EPOCH (which e.g. sets __DATE__ macros).
The FreeDink DFArc frontend historically displays a build date in the About box:
"Build Date: %s\n", ..., __TDATE__
Sadly, support is only landing upstream in GCC 7 :/
I had to remove that date.
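For projects that want to keep a visible build date, the reproducible-builds convention is to derive it from the SOURCE_DATE_EPOCH environment variable rather than from __DATE__/__TDATE__. A sketch of the idea (an illustration of the convention only, not what DFArc ended up doing; there the date was simply removed):

```python
import os
import time

def build_date() -> str:
    """Deterministic build date: honour SOURCE_DATE_EPOCH when set,
    falling back to the current time otherwise."""
    epoch = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
    return time.strftime("%b %d %Y", time.gmtime(epoch))

os.environ["SOURCE_DATE_EPOCH"] = "1490950327"
print(build_date())  # Mar 31 2017, identical on every rebuild
```

The same epoch then yields the same string in every environment, which is the whole point.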
Now comes the challenging parts.
All my tests with reprotest checked out. I started writing a reproducible build environment based on Docker (git browse).
At first I could not run reprotest in the container, so I reworked it with SSH support, and reprotest validated determinism.
(I also generate a reproducible .zip archive, more on that later.)
So far so good, but was the release identical when running reprotest successively in the different environments?
(reminder: this is a .exe build that is insensitive to varying path, hence consistent in a full reprotest)
$ sha256sum *.zip
189d0ca5240374896c6ecc6dfcca00905ae60797ab48abce2162fa36568e7cf1  freedink-109.0-bin-buildsh.zip
e182406b4f4d7c3a4d239eee126134ba5c0304bbaa4af3de15fd4f8bda5634a9  freedink-109.0-bin-docker.zip
e182406b4f4d7c3a4d239eee126134ba5c0304bbaa4af3de15fd4f8bda5634a9  freedink-109.0-bin-reprotest-docker.zip
37007f6ee043d9479d8c48ea0a861ae1d79fb234cd05920a25bb3db704828ece  freedink-109.0-bin-reprotest-null.zip
Ouch! Even though both the Docker and my host are running Stretch, there are differences.
For the two host builds (direct and reprotest), there is a subtle but simple difference: HOME.
HOME is invariably non-existent in reprotest, while my normal compilation environment has an existing home (duh!).
This caused a subtle bug when cross-compiling with mingw and wine-binfmt:
- existing home: ./configure attempts to run conftest.exe, wine can create ~/.wine, conftest.exe runs with binfmt emulation, configure assumes:
checking whether we are cross compiling... no
- non-existing home: ./configure attempts to run conftest.exe, wine can't create ~/.wine, conftest.exe fails, configure assumes:
checking whether we are cross compiling... yes
The respective binaries were very different notably due to a different config.h.
This can be fixed by specifying --build in addition to --host when calling ./configure.
I suggested reprotest have one of the tests with a valid HOME (#860428).
Now comes the big one, after the fix I still got:
$ sha256sum *.zip
3545270ef6eaa997640cb62d66dc610a328ce0e7d412f24c8f18fdc7445907fd  freedink-109.0-bin-buildsh.zip
cc50ec1a38598d143650bdff66904438f0f5c1d3e2bea0219b749be2dcd2c3eb  freedink-109.0-bin-docker.zip
3545270ef6eaa997640cb62d66dc610a328ce0e7d412f24c8f18fdc7445907fd  freedink-109.0-bin-reprotest-chroot.zip
cc50ec1a38598d143650bdff66904438f0f5c1d3e2bea0219b749be2dcd2c3eb  freedink-109.0-bin-reprotest-docker.zip
3545270ef6eaa997640cb62d66dc610a328ce0e7d412f24c8f18fdc7445907fd  freedink-109.0-bin-reprotest-null.zip
There is consistency on my host, and consistency within docker, but both are different.
Moreover, all the .o files were identical, so something must have gone wrong when compiling the libs, that is, in MXE.
After many checks it appears that libstdc++.a is different.
Just overwriting it gets me a consistent FreeDink release on all environments.
Still, when rebuilding it (make gcc), libstdc++.a always has the same environment-dependent checksum.
45f8c5d50a68aa9919ee3602a4e3f5b2bd0333bc8d781d7852b2b6121c8ba27b  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a  # host
6870b84f8e17aec4b5cf23cfe9c2e87e40d9cf59772a92707152361b6ebc1eb4  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a  # docker
The two libraries are very different; there's barely any blue in hexcompare.
Well, before jumping to conclusions, let's mix & match.
- First I rsync a copy of my Docker filesystem and run it in a host chroot with a reset environment.
$ sudo env -i /usr/sbin/chroot chroot-docker/
$ exec bash -l
$ cd /opt/mxe
$ touch src/gcc.mk
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
6870b84f8e17aec4b5cf23cfe9c2e87e40d9cf59772a92707152361b6ebc1eb4  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
$ make gcc
[build]     gcc    i686-w64-mingw32.static
[done]      gcc    i686-w64-mingw32.static 2709464 KiB 7m2.039s
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
45f8c5d50a68aa9919ee3602a4e3f5b2bd0333bc8d781d7852b2b6121c8ba27b  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
# consistent with host builds
- Then I import my previous reprotest chroot (plain debootstrap) in Docker:
$ sudo tar -C chroot -c . | docker import - chroot-debootstrap
$ docker run -ti chroot-debootstrap /bin/bash
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
45f8c5d50a68aa9919ee3602a4e3f5b2bd0333bc8d781d7852b2b6121c8ba27b  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
$ touch src/gcc.mk
$ make gcc
[build]     gcc    i686-w64-mingw32.static
[done]      gcc    i686-w64-mingw32.static 2709412 KiB 7m6.608s
$ sha256sum /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
6870b84f8e17aec4b5cf23cfe9c2e87e40d9cf59772a92707152361b6ebc1eb4  /opt/mxe/usr/lib/gcc/i686-w64-mingw32.static/5.4.0/libstdc++.a
# consistent with docker builds
So, AFAICS when building with:
- exactly the same kernel
- exactly the same GCC sources
- exactly the same host binaries
then depending on whether running in a container or not we get a consistent but different libstdc++.a.
This kind of issue is not detected with a simple reprotest build, as it only tests variations within a fixed build environment.
This is quite worrisome, I intend to use a container to control my build environment, but I can't guarantee that the container technology will be exactly the same 5 years from now.
All my setup is simple and available for inspection at https://git.savannah.gnu.org/cgit/freedink.git/tree/autobuild/freedink-w32-snapshot/.
I'd very much welcome enlightenment.
OK, I have been silent about systemd and its being forced onto us in Debian like force-feeding geese for foie gras. I have complained about systemd a few times (here and here), but what I read today really made me lose the last drops of trust I had in this monster piece of software.
If you are up for a really surprising read about the main figure behind systemd, enjoy this GitHub issue. It's about a bug that, in some cases, simply does the equivalent of rm -rf /. The OP gave clear indications, and the bug was fixed immediately, but then a comment from the God Poettering himself appeared that was the last straw:
"I am not sure I'd consider this much of a problem. Yeah, it's a UNIX pitfall, but 'rm -rf /foo/.*' will work the exact same way, no?" (Lennart Poettering, systemd issue 5644)
Well, no. A one-minute test would have shown him that this is not the case. But we entrust this guy with the whole management of the init process, servers, logs (and soon our toilet and fridge management, X, DNS, whatever you ask for).
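For the record, that one-minute test can be done in a throwaway directory (the path here is made up for illustration): rm refuses to remove the "." and ".." entries that the glob matches, so the two commands are not equivalent at all.

```shell
# One-minute check: "rm -rf /foo/.*" does NOT behave like "rm -rf /".
# The glob ".*" expands to ".", ".." and hidden entries, but rm refuses
# to operate on "." and "..", so only the hidden children are removed.
mkdir -p /tmp/rmtest/.hidden
rm -rf /tmp/rmtest/.* 2>/dev/null   # warns about "." and "..", removes .hidden
test -d /tmp/rmtest && echo "parent directory survived"
test ! -e /tmp/rmtest/.hidden && echo "only the hidden entry was removed"
```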
There are two issues here. One is that such a bug has probably been lurking in systemd for years. The reason is simple: we pay with these kinds of bugs for the incredible complexity increase of an init process that takes over too many services. Referring back to the Turing Award lecture given by Hoare, we see that systemd took the latter path:
"I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies." (C. A. R. Hoare, Turing Award Lecture, 1980)
The other issue is how the systemd developers deal with bug reports. I have reported several cases here; this is just another one: close the issue for comments, say nothing, and sweep it under the carpet.
(Image credit: The musings of an Indian Faust)
March was a busy month, so this monthly report is a little late. I worked two weekends, and I was planning my Easter holiday, so there wasn't a lot of spare time.

Debian
- Updated Dominate to the latest version and uploaded to experimental (due to the Debian Stretch release freeze).
- Uploaded the latest version of abcmidi (also to experimental).
- Pinged the bugs for reverse dependencies of pygoocanvas and goocanvas with a view to getting them removed from the archive during the Buster cycle.
- Asked for help on the Ubuntu Studio developers and users mailing lists to test the coming Ubuntu Studio 17.04 release ISO, because I would be away on holiday for most of it.
- Worked on ubuntustudio-controls, reverting it back to an earlier revision that Len said was working fine. Unfortunately, when I built and installed it from my PPA, it crashed. I eventually found my mistake with the bzr reversion, fixed it, and prepared an upload ready for sponsorship. Submitted a Freeze Exception bug in the hope that the Release Team would accept it even though we had missed the Final Beta.
- Put a new power supply in an old computer that was kaput, and got it working again. Set up Ubuntu Server 16.04 on it so that I could get a bit more experience with running a server. It won't last very long, because it is a 32-bit machine, and Ubuntu will probably drop support for that architecture eventually. I used two small spare drives to set up RAID 1 & LVM (so that I can add more space to it later). I set up some Samba shares so that my wife will be able to get at them from her Windows machine. For music streaming, I set up Emby Server; it would be great to see this packaged for Debian. I uploaded all of my photos and music for Emby to serve around the home (and remotely as well). Set up Obnam to back up the server to an external USB stick (temporarily, until I set up something remote). Set up Let's Encrypt with the wonderful Certbot program.
- Did the Release Notes for Ubuntu Studio 17.04 Final Beta. As I was in Brussels for two days, I was not able to do any ISO testing myself.
- Measured up the new model railway layout and documented it in xtrkcad.
- Started learning Ansible some more by setting up ssh on all my machines so that I could access them with Ansible and manipulate them using a playbook.
- Went to the Open Source Days conference just down the road in Copenhagen. Saw some good presentations. Of interest for my previous work in the Debian GIS Team was a presentation from the Danish municipalities on how they run projects using Open Source; I noted their use of Proj.4 and OSGeo. I was also pleased to see a presentation from Ximin Luo on Reproducible Builds, and introduced myself briefly after his talk (during the break).
- Started looking at creating a Django website to store and publish my One Name Study sources (indexes). Started by creating a library to list some of my recently read journals. I will eventually need to import all the others I have listed in a CSV spreadsheet that was originally exported from the commercial (Windows-only) Custodian software.
For the Debian Stretch release:
- Keep an eye on the Release Critical bugs list, and see if I can help fix any. – In Progress
- Package all the latest upstream versions of my Debian packages, and upload them to Experimental to keep them out of the way of the Stretch release. – In Progress
- Begin working again on all the new stuff I want packaged in Debian.
- Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. – Started
- Start testing & bug triaging Ubuntu Studio packages. – In progress
- Test Len’s work on ubuntustudio-controls – Done
- Do the Ubuntu Studio Zesty 17.04 Final Beta release. – Done
- Sort out the Blueprints for the coming Ubuntu Studio 17.10 release cycle.
- Give JMRI a good try out and look at what it would take to package it. – In progress
- Also look at OpenPLC for simulating the relay logic of real railway interlockings (i.e. a little bit of the day job at home involving free software – fun!). – In progress
Last year I blogged about blacklisting a video driver so that KVM virtual machines didn't go into graphics mode. Now I've been working on some other things to make virtual machines run better.
I use the same initramfs for the physical hardware as for the virtual machines. So I need to remove modules that are needed for booting the physical hardware from the VMs as well as other modules that get dragged in by systemd and other things. One significant saving from this is that I use BTRFS for the physical machine and the BTRFS driver takes 1M of RAM!
The first thing I did to reduce the number of modules was to edit /etc/initramfs-tools/initramfs.conf and change "MODULES=most" to "MODULES=dep". This significantly reduced the number of modules loaded and also stopped the initramfs from probing for a non-existent floppy drive, which added about 20 seconds to the boot. Note that this will result in your initramfs not supporting different hardware. So if you plan to take a hard drive out of your desktop PC and install it in another PC this could be bad for you, but for servers it's OK, as that sort of upgrade is uncommon for servers and only done with some planning (such as creating an initramfs just for the migration).
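The change itself is a one-line edit, after which the initramfs needs regenerating with "update-initramfs -u":

```
# /etc/initramfs-tools/initramfs.conf
# "dep" includes only modules the current hardware needs, instead of "most"
MODULES=dep
```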
I put the following rmmod commands in /etc/rc.local to remove modules that are automatically loaded:
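As an illustration of the kind of entries involved (the module names below are placeholders, not my actual list), such rc.local additions look like:

```
# /etc/rc.local additions (example only; module names are placeholders)
rmmod pcspkr 2>/dev/null || true
rmmod joydev 2>/dev/null || true
rmmod parport_pc 2>/dev/null || true
rmmod parport 2>/dev/null || true
```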
In /etc/modprobe.d/blacklist.conf I have the following lines to stop drivers from being loaded. The first line is to stop the video mode being set, and the rest are just to save space. One thing that inspired me to do this is that the parallel port driver gave a kernel error when it loaded and tried to access non-existent hardware.
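For illustration, blacklist entries of that kind look like the following (the driver names here are assumptions, not the actual list):

```
# /etc/modprobe.d/blacklist.conf in the VM (example only)
blacklist bochs_drm     # stop the virtual video driver setting a graphics mode
blacklist parport_pc    # non-existent parallel port hardware
blacklist pcspkr        # just saves space
```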
On the physical machine I have the following in /etc/modprobe.d/blacklist.conf. Most of this is to prevent loading of filesystem drivers when making an initramfs. I do this because I know there’s never going to be any need for CDs, parallel devices, graphics, or strange block devices in a server room. I wouldn’t do any of this for a desktop workstation or laptop.
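By way of example (again, the names are assumptions), the physical machine's blacklist might contain entries like:

```
# /etc/modprobe.d/blacklist.conf on the physical server (example only)
blacklist isofs         # no CDs in the server room
blacklist udf
blacklist parport       # no parallel devices
blacklist parport_pc
blacklist floppy        # no strange block devices
```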
Calibre is the premier open source e-book management program, but the Debian releases often lag behind the official releases. Furthermore, the Debian packages remove support for rar-packed e-books, which means that several comic book formats cannot be handled.
Thus, I have published a local repository of Calibre packages targeting Debian/sid, with binaries for amd64, where rar support is enabled and, as far as possible, the latest version is included.
deb http://www.preining.info/debian/ calibre main
deb-src http://www.preining.info/debian/ calibre main
The releases are signed with my Debian key 0x6CACA448860CDC13.
The Debian Project Leader elections finished yesterday and the winner is Chris Lamb!
Of a total of 1062 developers, 322 voted using the Condorcet method.
More information about the result is available in the Debian Project Leader Elections 2017 page.
The current Debian Project Leader, Mehdi Dogguy, congratulated Chris Lamb in his Final bits from the (outgoing) DPL message. Thanks, Mehdi, for your service as DPL during these last twelve months!
The new term for the project leader starts on April 17th and expires on April 16th 2018.
I would also like to thank Mehdi for his tireless service and wish him all the best for the future. It is an honour to be elected as the DPL and I am humbled that you would place your faith and trust in me.
Moments ago Rcpp passed a big milestone: there are now 1000 packages on CRAN depending on it (as measured by Depends, Imports, and LinkingTo, but excluding Suggests). The graph on the left depicts the growth of Rcpp usage over time.
One easy way to compute such reverse dependency counts is the tools::dependsOnPkgs() function that was just mentioned in yesterday's R^4 blog post. Another way is to use the reverse_dependencies_with_maintainers() function from this helper scripts file on CRAN. Lastly, devtools has a revdep() function, but it has the wrong default parameters for this purpose, as it includes Suggests:, which you'd have to override to get the count I use here (it currently reports 1012 under this wider measure).
Rcpp cleared 300 packages in November 2014. It passed 400 packages in June 2015 (when I only tweeted about it), 500 packages in late October 2015, 600 packages last March, 700 packages last July, 800 packages last October and 900 packages early January. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is kept on this page.
Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four percent hurdle was cleared just before useR! 2014, where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December 2014, six percent in July 2015, seven percent just before Christmas 2015, eight percent last summer, and nine percent in mid-December 2016. Ten percent is next; we may get there during the summer.
1000 user packages is a really large number. This puts a whole lot of responsibility on us in the Rcpp team as we continue to keep Rcpp as performant and reliable as it has been.
And with that a very big Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.