Why would anybody want these IDs? The simple answer is that app authors mostly make money by selling advertising, and advertisers like to know who's seeing their advertisements. The more app views they can tie to a single individual, the more they can track that user's response to different kinds of adverts, and the more targeted (and, they hope, more profitable) the advertising they can aim at that user. Using the same ID between multiple apps makes this easier, so a device-level ID is preferred over an app-level one. The IMEI is the most stable ID on Android devices, persisting even across factory resets.
The downside of using a device-level ID is, well, whoever has that data knows a lot about what you're running. That lets them tailor adverts to your tastes, but there are certainly circumstances where that could be embarrassing or even compromising. Using the IMEI for this is even worse, since it's also used for fundamental telephony functions - for instance, when a phone is reported stolen, its IMEI is added to a blacklist and networks will refuse to allow it to join. A sufficiently malicious person could potentially report your phone stolen and get it blocked by providing your IMEI. And phone networks are obviously able to track devices using them, so someone with enough access could figure out who you are from your app usage and then track you via your IMEI. But realistically, anyone with that level of access to the phone network could just identify you via other means. There's no reason to believe that this is part of a nefarious Chinese plot.
Is there anything you can do about this? On Android 6 and later, yes. Go to Settings, tap Apps, tap the gear menu in the top right, choose "App permissions" and scroll down to Phone. There you'll see all apps that have permission to obtain this information, and you can turn it off per app. Doing so may cause some older apps to crash or otherwise misbehave, while newer apps will simply ask you to grant the permission again and may refuse to work if you don't.
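For those comfortable with a command line, the same permission can be inspected and revoked over adb. This is a sketch: the package name below is a placeholder, and `pm revoke` only affects runtime permissions on Android 6 and later.

```shell
# Check whether a (hypothetical) app holds the phone-state permission,
# then revoke it. Replace com.example.app with the real package name.
adb shell dumpsys package com.example.app | grep READ_PHONE_STATE
adb shell pm revoke com.example.app android.permission.READ_PHONE_STATE
```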
Meitu isn't especially rare in this respect. Over 50% of the Android apps I have handy request your IMEI, although I haven't tracked what they all do with it. It's certainly something to be concerned about, but there are big-name apps that do exactly the same thing. There's a legitimate question over whether Android should be making it so easy for apps to obtain this level of identifying information without more explicit informed consent from the user, but until Google does something to make it more difficult, apps will continue making use of this information. Let's turn this into a conversation about user privacy online rather than blaming one specific example.
Many people have been scratching their heads wondering what the new US president will really do and what he really stands for. His alternating positions on abortion, for example, suggest he may simply be telling people what he thinks is most likely to win public support from one day to the next. Will he really waste billions of dollars building a wall? Will Muslims really be banned from the US?
As it turns out, two movies in particular provide a thought-provoking insight into what could eventuate. What's more, both bear a creepy resemblance to the Trump phenomenon and to many of the problems in the world today.

Countdown to Looking Glass
On the classic cold-war theme of nuclear annihilation, Countdown to Looking Glass is probably far scarier to watch on Trump eve than it was in the era when it was made. Released in 1984, the movie follows a series of international crises that have all since come to pass: the assassination of a US ambassador in the Middle East, a banking crisis, and two superpowers in an escalating conflict over territory. The movie even picked a young Republican congressman for a cameo role: he subsequently went on to become Speaker of the House. To relate it to modern times you may need to imagine that it is China, not Russia, who is the adversary; but then you probably won't be able to sleep after watching it.

The Omen
Another classic is The Omen. The star of this series of four horror movies, Damien Thorn, has a history eerily reminiscent of Trump's: he is born into a wealthy family, disasters befall every honest person he comes into contact with, and he comes to control a vast business empire acquired by inheritance. As he enters the world of politics in the third movie of the series, there is a scene in the Oval Office where he is flippantly advised that he shouldn't lose any sleep over any conflict of interest arising from his business holdings. Did you notice that Damien Thorn and Donald Trump even share the same initials, DT?
What happened in the Reproducible Builds effort between Sunday January 8 and Saturday January 14 2017:

Upcoming Events
The Reproducible Build Zoo will be presented by Vagrant Cascadian at the Embedded Linux Conference in Portland, Oregon, February 22nd
Dennis Gilmore and Holger Levsen will present on "Reproducible Builds and Fedora" at Devconf.cz on February 27th.
Introduction to Reproducible Builds will be presented by Vagrant Cascadian at Scale15x in Pasadena, California, March 5th
Reproducible Builds have been mentioned in the FSF high-priority project list.
Bernhard M. Wiedemann did some more work on reproducibility for openSUSE.
Bootstrappable.org (unfortunately no HTTPS yet) was launched after the initial work was started at our recent summit in Berlin. This is another topic related to reproducible builds and both will be needed in order to perform "Diverse Double Compilation" in practice in the future.

Toolchain development and fixes
Ximin Luo researched data formats for SOURCE_PREFIX_MAP and explored different options for encoding a map data structure in a single environment variable. He also continued to talk with the rustc team on the topic.
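One simple way to encode such a map in a single environment variable (a sketch of the kind of encoding under discussion, not necessarily what was adopted) is to percent-escape the reserved characters in each path and join `source=target` pairs with colons:

```shell
# Encode a {source: target} path map into one environment variable.
# '%', '=' and ':' are escaped so they can't collide with the separators.
escape() { printf '%s' "$1" | sed -e 's/%/%25/g' -e 's/=/%3D/g' -e 's/:/%3A/g'; }
pair1="$(escape /build/1st/pkg)=$(escape /usr/src/pkg)"
pair2="$(escape /build/2nd/pkg)=$(escape /usr/src/pkg)"
SOURCE_PREFIX_MAP="$pair1:$pair2"
echo "$SOURCE_PREFIX_MAP"
# prints /build/1st/pkg=/usr/src/pkg:/build/2nd/pkg=/usr/src/pkg
```

A consumer (compiler or build tool) would split on unescaped `:` and `=`, then unescape each component.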
Daniel Shahaf filed #851225 ('udd: patches: index by DEP-3 "Forwarded" status') to make it easier to track our patches.
13 package reviews have been added and 13 have been removed this week, adding to our knowledge about identified issues.
1 issue type has been added:
During our reproducibility testing, the following FTBFS bugs have been detected and reported by:
- Chris Lamb (3)
- Lucas Nussbaum (11)
- Nicola Corna (1)
Many bugs were opened in diffoscope during the past few weeks, which is probably a good sign, as it shows that diffoscope is much more widely used than it was a year ago. We have been working hard to squash many of them in time for Debian stable, though we will see how that goes in the end…
- Mattia Rizzolo:
- Code quality and style improvements.
- Maria Glukhova:
- Chris Lamb:
- Correctly escape value of href="" elements (re. #849411)
- Support comparing .ico files using img2txt (Closes: #850730) and fixes and extra tests in subsequent commits.
- comparators.utils.file: Include magic file type when we know the file format but can't find file-specific details. (Closes: #850850)
- And other code quality and style improvements.
- Daniel Shahaf:
- Holger Levsen:
Ximin Luo and Holger Levsen worked on stricter tests to check that /dev/shm and /run/shm are both mounted with the correct permissions. Some of our build machines currently still fail this test, and the problem is probably the root cause of the FTBFS of some packages (which fail with issues regarding sem_open). The proper fix is still being discussed in #851427.
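A minimal version of such a check might look like this (a sketch, not the actual test used on the build machines): both mount points should be world-writable with the sticky bit, i.e. mode 1777, for sem_open to work for all users.

```shell
# Verify that the shared-memory mount points have mode 1777.
for d in /dev/shm /run/shm; do
  [ -d "$d" ] || continue
  mode="$(stat -c %a "$d")"
  [ "$mode" = "1777" ] || echo "$d has mode $mode, expected 1777"
done
```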
Valerie Young worked on creating and linking autogenerated schema documentation for our database used to store the results.
Holger Levsen added a graph with diffoscope crashes and timeouts.
Holger also further improved the daily mail notifications about problems.
This week's edition was written by Ximin Luo, Chris Lamb and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.
With the release of Gitlab 8.15 it was announced that PostgreSQL needs to be upgraded. Because I had migrated from a source installation, I was using an external PostgreSQL database instead of the one shipped with the omnibus package. So I decided to do the data migration into the omnibus PostgreSQL database now, which I had skipped before.
Let's have a look into the databases:
$ sudo -u postgres psql -d template1
psql (9.2.18)
Type "help" for help.

gitlabhq_production=# \l
                                List of databases
         Name          |       Owner       | Encoding | Collate | Ctype   | Access privileges
-----------------------+-------------------+----------+---------+---------+-------------------
 gitlabhq_production   | git               | UTF8     | C.UTF-8 | C.UTF-8 |
 gitlab_mattermost     | git               | UTF8     | C.UTF-8 | C.UTF-8 |

gitlabhq_production=# \q
Dump the databases and stop PostgreSQL. You may need to adjust database names and users for your setup.
$ su postgres -c "pg_dump gitlabhq_production -f /tmp/gitlabhq_production.sql" && \
  su postgres -c "pg_dump gitlab_mattermost -f /tmp/gitlab_mattermost.sql" && \
  /etc/init.d/postgresql stop
Activate PostgreSQL shipped with Gitlab Omnibus
$ sed -i "s/^postgresql\['enable'\] = false/#postgresql\['enable'\] = false/g" /etc/gitlab/gitlab.rb && \
  sed -i "s/^#mattermost\['enable'\] = true/mattermost\['enable'\] = true/" /etc/gitlab/gitlab.rb && \
  gitlab-ctl reconfigure
Testing if the connection to the databases works
$ su - git -c "psql --username=gitlab --dbname=gitlabhq_production --host=/var/opt/gitlab/postgresql/"
psql (9.2.18)
Type "help" for help.

gitlabhq_production=# \q

$ su - git -c "psql --username=gitlab --dbname=mattermost_production --host=/var/opt/gitlab/postgresql/"
psql (9.2.18)
Type "help" for help.

mattermost_production=# \q
Ensure pg_trgm extension is enabled
$ sudo gitlab-psql -d gitlabhq_production -c 'CREATE EXTENSION IF NOT EXISTS "pg_trgm";'
$ sudo gitlab-psql -d mattermost_production -c 'CREATE EXTENSION IF NOT EXISTS "pg_trgm";'
Adjust permissions in the database dumps. Again, please verify whether user and database names need to be adjusted for your setup.
$ sed -i "s/OWNER TO git;/OWNER TO gitlab;/" /tmp/gitlabhq_production.sql && \
  sed -i "s/postgres;$/gitlab-psql;/" /tmp/gitlabhq_production.sql
$ sed -i "s/OWNER TO git;/OWNER TO gitlab_mattermost;/" /tmp/gitlab_mattermost.sql && \
  sed -i "s/postgres;$/gitlab-psql;/" /tmp/gitlab_mattermost.sql
(Re)import the data
$ sudo gitlab-psql -d gitlabhq_production -f /tmp/gitlabhq_production.sql
$ sudo gitlab-psql -d gitlabhq_production -c 'REVOKE ALL ON SCHEMA public FROM "gitlab-psql";' && \
  sudo gitlab-psql -d gitlabhq_production -c 'GRANT ALL ON SCHEMA public TO "gitlab-psql";'
$ sudo gitlab-psql -d mattermost_production -f /tmp/gitlab_mattermost.sql
$ sudo gitlab-psql -d mattermost_production -c 'REVOKE ALL ON SCHEMA public FROM "gitlab-psql";' && \
  sudo gitlab-psql -d mattermost_production -c 'GRANT ALL ON SCHEMA public TO "gitlab-psql";'
Make use of the shipped PostgreSQL
$ sed -i "s/^gitlab_rails\['db_/#gitlab_rails\['db_/" /etc/gitlab/gitlab.rb && \
  sed -i "s/^mattermost\['sql_/#mattermost\['sql_/" /etc/gitlab/gitlab.rb && \
  gitlab-ctl reconfigure
Now you should be able to connect to all the Gitlab services again.
Optionally remove the external database
apt-get remove postgresql postgresql-client postgresql-9.4 postgresql-client-9.4 postgresql-client-common postgresql-common
Maybe you also want to purge the old database content
apt-get purge postgresql-9.4
https://manpages.debian.org has been modernized! We have just launched a major update to our manpage repository. What used to be served via a CGI script is now a statically generated website, and therefore blazingly fast.
While we were at it, we have restructured the paths so that we can serve all manpages, even those whose name conflicts with other binary packages (e.g. crontab(5) from cron, bcron or systemd-cron). Don’t worry: the old URLs are redirected correctly.
Furthermore, the design of the site has been updated and now includes navigation panels that allow quick access to the manpage in other Debian versions, other binary packages, other sections and other languages. Speaking of languages, the site serves manpages in all their available languages and respects your browser’s language when redirecting or following a cross-reference.
Much like the Debian package tracker, manpages.debian.org includes packages from Debian oldstable, oldstable-backports, stable, stable-backports, testing and unstable. New manpages should make their way onto manpages.debian.org within a few hours.
The generator program (“debiman”) is open source and can be found at https://github.com/Debian/debiman. In case you would like to use it to run a similar manpage repository (or convert your existing manpage repository to it), we’d love to help you out; just send an email to stapelberg AT debian DOT org.
This effort is standing on the shoulders of giants: check out https://manpages.debian.org/about.html for a list of people we thank.
We’d love to hear your feedback and thoughts. Either contact us via an issue on https://github.com/Debian/debiman/issues/, or send an email to the debian-doc mailing list (see https://lists.debian.org/debian-doc/).
I wanted to write a more in-depth post about RetroPie the Retro Gaming Appliance OS for Raspberry Pis, either technically or more positively, but unfortunately I don't have much positive to write.
What I hoped for was a nice appliance that I could use to play old games from the comfort of my sofa. Unfortunately, nine times out of ten, I had a malfunctioning Linux machine and the time I'd set aside for jumping on goombas was being spent trying to figure out why bluetooth wasn't working. I have enough opportunities for that already, both at work and at home.
I feel a little bad complaining about an open source, volunteer project: in its defence I can say that it is iterating fast and the two versions I tried in a relatively short time span were rapidly different. So hopefully a lot of my woes will eventually be fixed. I've also read a lot of other people get on with it just fine.
Instead, I decided the Nintendo Classic NES Mini was the plug-and-play appliance for me. Alas, it became the "must have" Christmas toy for 2016 and impossible to obtain for the recommended retail price. I did succeed in finding one in stock at Toys R Us online at one point, only to have the checkout process break and my order not go through. Checking Stock Informer afterwards, that particular window of opportunity was only 5 minutes wide. So no NES classic for me!
My adventures in RetroPie weren't entirely fruitless, thankfully: I discovered two really nice pieces of hardware.
The first is Lenovo's ThinkPad Compact Bluetooth Keyboard with TrackPoint, a very compact but pleasant to use Bluetooth keyboard including a trackpoint. I miss the trackpoint from my days as a Thinkpad laptop user. Having a keyboard and mouse combo in such a small package is excellent. My only two complaints would be the price (I was lucky to get one cheaper on eBay) and the fact it's bluetooth only: there's a micro-USB port for charging, but it would be nice if it could be used as a USB keyboard too. There's a separate, cheaper USB model.
The second neat device is a clone of the SNES gamepad by the Hong Kong company 8bitdo, called the SFC30. This looks and feels very much like the classic Nintendo SNES controller, albeit slightly thicker from front to back. It can be used in a whole range of different modes, including attached USB; Bluetooth pretending to be a keyboard; Bluetooth pretending to be a controller; and a bunch of other special modes designed to work with iOS or Android devices in various configurations. The manufacturer seems to be actively working on firmware updates to further enhance the controller. The firmware is presently closed source, but it would not be impossible to write an open source firmware for it (some people have figured out the basis for the official firmware).
I like the SFC30 enough that I spent some time trying to get it working for various versions of Doom. There are just enough buttons to control a 2.5D game like Doom, whereas something like Quake or a more modern shooter would not work so well. I added support for several 8bitdo controllers directly into Chocolate Doom (available from 2.3.0 onwards) and into SDL2, a popular library for game development, which I think is used by Steam, so Steam games may all gain SFC30 support in the future too.
I pledge that before sending each email to the debian-devel mailing list I move forward at least one actionable bug in my packages.
Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows cloning, pulling and pushing from/to mercurial remote repositories, using git.
These release notes are also available on the git-cinnabar wiki.

What’s new since 0.3.2?
- Various bug fixes.
- Updated git to 2.11.0 for cinnabar-helper.
- Now supports bundle2 for both fetch/clone and push (https://www.mercurial-scm.org/wiki/BundleFormat2).
- Now supports git credential for HTTP authentication.
- Now supports git push --dry-run.
- Added a new git cinnabar fetch command to fetch a specific revision that is not necessarily a head.
- Added a new git cinnabar download command to download a helper on platforms where one is available.
- Removed upgrade path from repositories used with version < 0.3.0.
- Experimental (and partial) support for using git-cinnabar without having mercurial installed.
- Use a mercurial subprocess to access local mercurial repositories.
- Cinnabar-helper now handles fast-import, with workarounds for performance issues on macOS.
- Fixed some corner cases involving empty files. This prevented cloning Mozilla’s stylo incubator repository.
- Fixed some correctness issues in file parenting when pushing changesets pulled from one mercurial repository to another.
- Various improvements to the rules to build the helper.
- Experimental (and slow) support for pushing merges, with caveats. See issue #20 for details about the current status.
- Fail graft earlier when no commit was found to graft
- Allow graft to work with git version < 1.9
- Allow git cinnabar bundle to do the same grafting as git push
As the freeze of the next release is closing in, I have updated a bunch of packages around TeX: All of the TeX Live packages (binaries and arch independent ones) and tex-common. I might see whether I get some updates of ConTeXt out, too.
The changes in the binaries are mostly cosmetic: one removal of a non-free (unclear-if-free) file, and several upstream patches were cherry-picked (dvips, tltexjp contact email, upmendex, dvipdfmx). I played around with including LuaTeX v1.0, but that breaks horribly with the current packages in TeX Live, so I refrained from it. The infrastructure package tex-common got a bugfix for updates from previous releases, and for the other packages there is the usual bunch of updates and new packages. Enjoy!

New packages
arimo, arphic-ttf, babel-japanese, conv-xkv, css-colors, dtxdescribe, fgruler, footmisx, halloweenmath, keyfloat, luahyphenrules, math-into-latex-4, mendex-doc, missaali, mpostinl, padauk, platexcheat, pstring, pst-shell, ptex-fontmaps, scsnowman, stanli, tinos, undergradmath, yaletter.

Updated packages
acmart, animate, apxproof, arabluatex, arsclassica, babel-french, babel-russian, baskervillef, beamer, beebe, biber, biber.x86_64-linux, biblatex, biblatex-apa, biblatex-chem, biblatex-dw, biblatex-gb7714-2015, biblatex-ieee, biblatex-philosophy, biblatex-sbl, bidi, calxxxx-yyyy, chemgreek, churchslavonic, cochineal, comicneue, cquthesis, csquotes, ctanify, ctex, cweb, dataref, denisbdoc, diagbox, dozenal, dtk, dvipdfmx, dvipng, elocalloc, epstopdf, erewhon, etoolbox, exam-n, fbb, fei, fithesis, forest, glossaries, glossaries-extra, glossaries-french, gost, gzt, historische-zeitschrift, inconsolata, japanese-otf, japanese-otf-uptex, jsclasses, latex-bin, latex-make, latexmk, lt3graph, luatexja, markdown, mathspec, mcf2graph, media9, mendex-doc, metafont, mhchem, mweights, nameauth, noto, nwejm, old-arrows, omegaware, onlyamsmath, optidef, pdfpages, pdftools, perception, phonrule, platex-tools, polynom, preview, prooftrees, pst-geo, pstricks, pst-solides3d, ptex, ptex2pdf, ptex-fonts, qcircuit, quran, raleway, reledmac, resphilosophica, sanskrit, scalerel, scanpages, showexpl, siunitx, skdoc, skmath, skrapport, smartdiagram, sourcesanspro, sparklines, tabstackengine, tetex, tex, tex4ht, texlive-scripts, tikzsymbols, tocdata, uantwerpendocs, updmap-map, uplatex, uptex, uptex-fonts, withargs, wtref, xcharter, xcntperchap, xecjk, xellipsis, xepersian, xint, xlop, yathesis.
Issue ticket #20 demonstrated that we had not yet set up Windows for version 3 of Google Protocol Buffers ("Protobuf"), while the other platforms already support it. So I made the change, and there is now release 0.4.8.
RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding and serialization library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.
The NEWS file summarises the release as follows:

Changes in RProtoBuf version 0.4.8 (2017-01-17)
CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.
“Debian is very difficult, a puzzle.” This surprising statement is what I got last week when talking with a group of new IT students (and their teachers).
I would like to write down here what I was able to obtain from that conversation.
From time to time, as part of my job at CICA, we open the doors of our datacenter to IT students from all around Andalusia (our region) who want to learn what we do here and how we do it. All our infrastructure and servers are primarily built using FLOSS software (we have some exceptions, like backbone routers and switches), and the most important servers run Debian.
When I am in such a meeting with a visiting group, I usually ask which technologies they use and learn in their studies. The other day, they told me they use mostly Ubuntu and a bit of Fedora.
When I asked why not Debian, the answer was that it was very difficult to install and manage. I tried to obtain some facts about this but failed, in what seems to be a case of bad fame: a reputation problem that has spread among the teachers and therefore among the students. I didn’t detect any brand bias or the like; it just seems to be a lack of knowledge, plus Debian’s bad reputation.
Using my DD powers and responsibilities, I kindly asked for feedback on how to improve our installer or whatever else they might find difficult, but a week later I have received no email so far.
Then, what I obtain is nothing new:
- we probably need more new-users feedback
- we have work to do in the marketing/branding area
- we have very strong competitors out there
- we should keep doing our best
I myself recently had to use the Ubuntu installer on a laptop, and it didn’t seem that different from the Debian one: same steps and choices, like in every other OS installation.
Please, spread the word: Debian is not difficult. Certainly not perfect, but I don’t think that installing and using Debian is such a puzzle.
In my ongoing quest to get Tablet-Mode working on my Hybrid machine, here's how I've been living with it so far. My intent is to continue using Free Software for both use cases. My wishful thought is to use the same software under both use cases.
- Browser: On the browser front, things are pretty decent. Chromium has good support for Touchscreen input. Most of the Touchscreen use cases work well with Chromium. On the Firefox side, after a huge delay, finally, Firefox seems to be catching up. Hopefully, with Firefox 51/52, we'll have a much more usable Touchscreen browser.
- Desktop Shell: One of the reasons for migrating to GNOME was its touch support. From what I've explored so far, GNOME is the only desktop shell that has touch support done natively. The feature isn't complete yet, but is fairly usable.
- Given that GNOME has native touchscreen support, it makes sense to use the GNOME equivalents of tools for common use cases. Most of these tools inherit their touchscreen capabilities from the underlying GNOME libraries.
- File Manager: Nautilus has decent touch support as a file manager. The only annoying bit is the lack of a right-click equivalent, or in touch input terms, a long-press.
- Movie Player: There's a decent movie player based on GNOME libs: GNOME MPV. In my limited use so far, its interface seems to have good touch support. Other contenders are:
- SMPlayer is based on Qt libs, so the initial expectation would be that Qt-based apps have better touch support. But I'm yet to see any serious Qt application with touch input support. Back to SMPlayer: the developer is pragmatic enough to recognize tablet-mode users, and as such has provided a so-called "Tablet Mode" view for SMPlayer (the tooltip did not get captured in the screenshot).
- MPV doesn't come with a UI but has basic management via the OSD. In my limited usage, the OSD implementation does seem capable of taking touch input.
- Books / Documents: GNOME Documents/Books is very basic in what it has to offer, to the point that it is not very useful. But since it is based on the same GNOME libraries, it enjoys native touch input support. Calibre, on the other hand, is feature-rich, but it is based on (Py)Qt. Touch input is said to work on Windows; for Linux, there's no support yet. The good thing about Calibre is that it has its own UI, which is pretty decent in a Tablet-Mode touch workflow.
- Photo Management: With compact digital devices commonly available, digital content (both photos and videos) is on the rise. The most obvious names that come to mind are Digikam and Shotwell.
- Shotwell saw its reincarnation in the recent past. From what I recollect, it does have touch support but was lacking quite a bit in terms of features compared to Digikam.
- Digikam is an impressive tool for digital content management. While Digikam is a KDE project, it thankfully does a great job of keeping its KDE dependencies to a bare minimum. But given that Digikam builds on KDE/Qt libs, I haven't had much success in getting a good touch input solution for Tablet Mode. To make it barely usable in Tablet Mode, one can choose a theme preference with bigger toolbars, labels and scrollbars, which makes a touch input workaround possible. As you can see, I've configured the Digikam UI with text alongside icons for easy touch input.
- Email: The most common use case. With Gmail and friends, many believe standalone email clients are no longer needed. But there will always be users like us who prefer email offline, encrypted email, and their own email domains. Much of this is still doable with free services like Gmail, but still.
- Thunderbird shows its age at times. And given the state of Firefox in getting touch support (and a GTK3 port), I see nothing happening with Thunderbird.
- KMail was something I discontinued while still being on KDE. The debacle that KDEPIM was, is something I'd always avoid, in the future. Complete waste of time/resource in building, testing, reporting and follow-ups.
- Geary is another email client that recently saw its reincarnation. I explored Geary recently; it enjoys the same benefits as the rest of the applications using GNOME libraries. There was one touch input bug I found, but otherwise Geary's feature set was limited in comparison to Evolution's.
- Migrating to Evolution, when migrating to GNOME, was not easy. GNOME's philosophy is to keep things simple and limited, and in doing so they restrict flexibility that users may find obvious. This design philosophy is easily visible across all applications of the GNOME family, and Evolution is no different. Hence, coming from Thunderbird to Evolution was a small unlearning + new-learning curve. And since Evolution uses the same GNOME libraries, it enjoys the same benefits. Touch input support in Evolution is fairly good. The missing bit is the new toolbar and menu structure that many have noticed in the newer GNOME applications (Photos, Documents, Nautilus etc.). If only Evolution (and the GNOME family) offered customization beyond the developer's/project's view, there wouldn't be any wishful thoughts.
- Above is a screenshot of two windows of Evolution. Even in its current form, Evolution is a gem at times. My RSS feeds are stored in a VFolder in Evolution, so that I can read them when offline; RSS feeds are something I read in Tablet Mode. On the right is an Evolution window with larger fonts, while on the left, Evolution still retains its default font size. This current behavior helps me get Tablet-Mode touch working to an extent. In my wishful thoughts, I wish Evolution provided the flexibility to change toolbar icon sizes; that would really help in easily touching the delete button when in Tablet Mode. A simple "Tablet Mode" button, like what SMPlayer has done, would keep users sticky with Evolution.
My wishful thought is that people write (free) software thinking more about usability across toolkits and desktop environments. Otherwise, the year of the Linux desktop, laptop, tablet, in my opinion, is yet to come. And please don't rip apart tools in porting them to newer versions of the toolkits. When you rip apart a tool, you also rip away all the QA, bug reporting and testing that was done over the years.
In this whole exercise of getting a hybrid setup working, I also came to realize that there does not yet seem to be a standardized interface to determine the current operating mode of a running hybrid machine. From what we have explored so far, every product has its own way of doing it. Most hybrids come pre-installed and supported with Windows only, so their mode detection logic seems to be proprietary too. In case anyone is aware of a standard interface, please drop a note in the comments.
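For what it's worth, on machines whose firmware does report the mode through the Linux input layer there is a SW_TABLET_MODE switch event one can look for. Whether a given hybrid exposes it varies, and the event device path below is a placeholder:

```shell
# List input devices that declare switch ("SW") capabilities, then watch
# one of them for SW_TABLET_MODE events (replace eventX with your device).
grep -B5 '^B: SW=' /proc/bus/input/devices
sudo evtest /dev/input/eventX
```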
One of the PGP Clean Room’s aims is to provide users with the option to easily initialize one or more smartcards with personal info and PINs, and subsequently transfer keys to the smartcard(s). The advantage of using smartcards is that users don’t have to expose their keys to their laptop for daily certification, signing, encryption or authentication purposes.
I started building a basic whiptail TUI that asks users if they will be using a smartcard:
If yes, whiptail provides the user with the opportunity to initialize the smartcard with their name, preferred language and login, and change their admin PIN, user PIN, and reset code.
I outlined the commands and interactions necessary to edit personal info on the smartcard using gpg --card-edit, and to send the keys to the card with gpg --edit-key <FPR>, in smartcard-workflow. There’s no batch mode for smartcard operations, and there’s no “quick” command for it just yet (as in --quick-addkey). One option would be to try this out with command-fd/command-file. Currently, Python bindings for GPGME are under development, so that is another possibility.
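As a rough illustration of the command-fd approach: the interactive card-edit prompts can be fed from a heredoc. Note this is a sketch; the exact prompt/answer sequence is an assumption and varies with GnuPG version and card state.

```shell
# Drive `gpg --card-edit` non-interactively; the answers below are
# illustrative placeholders for the admin menu, name and language prompts.
gpg --batch --command-fd 0 --status-fd 1 --card-edit <<'EOF'
admin
name
Doe
Jane
lang
en
quit
EOF
```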
We can use this workflow to support two smartcards: one for the primary key and one for the subkeys, a setup that would also support subkey rotation.
In December, about 175 work hours have been dispatched among 14 paid contributors. Their reports are available:
- Antoine Beaupré did 20.5 hours (out of 13.5 hours allocated + 7 remaining hours).
- Balint Reczey did 10 hours (out of 13.5 hours allocated, thus keeping 2.5 hours for January).
- Ben Hutchings did 10 hours (out of 13.5 hours allocated + 2 hours remaining, thus keeping 5.5 extra hours for January).
- Brian May did 10 hours.
- Chris Lamb did 13.5 hours.
- Emilio Pozuelo Monfort did 11 hours (out of 13.5 hours allocated, thus keeping 2.5 extra hours for January).
- Guido Günther did 8 hours.
- Hugo Lefeuvre did 11 hours (out of 13.5 hours allocated, thus keeping 2.5 extra hours for January).
- Jonas Meurer did 5.25 hours (out of 12 hours allocated, thus keeping 6.75 extra hours for January).
- Markus Koschany did 13.5 hours.
- Ola Lundqvist did 13.5 hours.
- Raphaël Hertzog did 10 hours.
- Roberto C. Sanchez did 13.5 hours.
- Thorsten Alteholz did 13.5 hours.
The number of sponsored hours did not increase but a new silver sponsor is in the process of joining. We are only missing another silver sponsor (or two to four bronze sponsors) to reach our objective of funding the equivalent of a full time position.
New sponsors are in bold.
- Platinum sponsors:
- Gold sponsors:
- The Positive Internet (for 30 months)
- Blablacar (for 29 months)
- Linode LLC (for 19 months)
- Babiel GmbH (for 8 months)
- Plat’Home (for 8 months)
- Silver sponsors:
- Domeneshop AS (for 29 months)
- Université Lille 3 (for 29 months)
- Trollweb Solutions (for 27 months)
- Nantes Métropole (for 23 months)
- University of Luxembourg (for 21 months)
- Dalenys (for 20 months)
- Univention GmbH (for 15 months)
- Université Jean Monnet de St Etienne (for 15 months)
- Sonus Networks (for 9 months)
- UR Communications BV (for 3 months)
- maxcluster GmbH (for 3 months)
- Bronze sponsors:
- David Ayers – IntarS Austria (for 30 months)
- Evolix (for 30 months)
- Offensive Security (for 30 months)
- Seznam.cz, a.s. (for 30 months)
- Freeside Internet Service (for 29 months)
- MyTux (for 29 months)
- Linuxhotel GmbH (for 27 months)
- Intevation GmbH (for 26 months)
- Daevel SARL (for 25 months)
- Bitfolk LTD (for 24 months)
- Megaspace Internet Services GmbH (for 24 months)
- Greenbone Networks GmbH (for 23 months)
- NUMLOG (for 23 months)
- WinGo AG (for 22 months)
- Ecole Centrale de Nantes – LHEEA (for 19 months)
- Sig-I/O (for 16 months)
- Entr’ouvert (for 14 months)
- Adfinis SyGroup AG (for 11 months)
- Laboratoire LEGI – UMR 5519 / CNRS (for 6 months)
- Quarantainenet BV (for 6 months)
- GNI MEDIA (for 5 months)
- RHX Srl (for 3 months)
Two more weeks of my awesome Outreachy journey have passed, so it is time for an update on my progress.
I continued my work on improving diffoscope by fixing bugs and completing wishlist items. These include:

Improving APK support
I worked on #850501 and #850502 to improve the way diffoscope handles APK files. Thanks to Emanuel Bronshtein for providing a clear description of how to reproduce these bugs and ideas on how to fix them.
And special thanks to Chris Lamb for insisting on providing tests for these changes! That part actually proved to be a little more tricky, and I managed to make a mess of those tests (extra thanks to Chris for cleaning up the mess I created). I hope that also means I learned something from my mistakes.
Also, I was pleased to see the F-Droid Verification Server as a sign of F-Droid’s progress on the reproducible builds effort - I hope these changes to diffoscope will help them!

Adding support for image metadata
That came from #849395 - a request was made to compare image metadata along with image content. Diffoscope has support for three image types: JPEG, MS Windows Icon (*.ico) and PNG. Among these, PNG already had good image metadata support thanks to the sng tool, so I worked on support for .jpeg and .ico files. I initially tried to use exiftool for extracting metadata, but then I discovered it does not handle .ico files, so I decided to use a bigger force - ImageMagick’s identify - for this task. I was glad to see it had that handy -format option I could use to select only the necessary fields (I found their -verbose, well, too verbose for the task) and present them in a defined form, removing the need to filter its output.
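For illustration, here is a small Python sketch of how one might build such an identify invocation and parse its -format output into a dictionary. The field list (%m %w %h) is my own illustrative choice, not necessarily the one diffoscope actually extracts:

```python
# Sketch: extract selected image metadata via ImageMagick's `identify`.
# The fields below (format, width, height) are illustrative only.
FORMAT = "%m %w %h"

def identify_cmd(path):
    """Build the argv for `identify -format '%m %w %h\\n' <path>`."""
    return ["identify", "-format", FORMAT + "\\n", path]

def parse_identify(line):
    """Parse one line of the -format output into a metadata dict."""
    fmt, width, height = line.split()
    return {"format": fmt, "width": int(width), "height": int(height)}
```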
What was particularly interesting and important for me in terms of learning: while working on this feature, I discovered that, at the moment, diffoscope could not handle .ico files at all - the img2txt tool, which was used for retrieving image content, did not support that type of image. But instead of recognizing this as a bug and resolving it, I started to think of a possible workaround, allowing image metadata to be retrieved even after retrieving the image content failed. Definitely not very good thinking. Thanks to Mattia Rizzolo for actually recognizing this as a bug and filing it, and to Chris Lamb for fixing it!

Other work

Order-like differences, part 2
In the previous post, I mentioned Lunar’s suggestion to use hashing for finding order-like differences in a wide variety of input data. I implemented that idea, but after discussion with my mentor, we decided it is probably not worth it - this change would alter quite a lot of things in the core modules of diffoscope, and the gain would not be significant.
Still, implementing that was an important experience for me, as I had to hack on the deepest and, arguably, most difficult modules of diffoscope, and I gained some insight into how they work.

Comparing with several tools (work in progress)
Although my initial motivation for this idea was flawed (the workaround I mentioned earlier for .ico files), it still might be useful to have a mechanism that runs several commands for finding differences and gives the output of those that succeed, failing if and only if they all have failed.
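A rough Python sketch of that "fail only if every tool fails" logic (a hypothetical helper, not diffoscope's actual API):

```python
import subprocess

def compare_with_tools(commands):
    """Run each external comparison command and collect the output of
    those that succeed. Raise only if every tool failed (hypothetical
    sketch, not diffoscope's real mechanism)."""
    outputs = []
    for cmd in commands:
        try:
            res = subprocess.run(cmd, capture_output=True, text=True, check=True)
            outputs.append(res.stdout)
        except (subprocess.CalledProcessError, FileNotFoundError):
            continue  # this tool failed or is not installed; try the next
    if not outputs:
        raise RuntimeError("all comparison tools failed")
    return outputs
```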
One possible case where this might happen is when we use commands coming from different tools, and one of them is not installed. It would be nice if we still used the other one instead of the uninformative binary diff (the default fallback for when something goes wrong with a more “clever” comparison). I am still in the process of polishing this change, though, and still in doubt whether it is needed at all.

Side note - Outreachy and my university progress
In my Outreachy application, I promised that if I was selected for this round, I would do everything I could to clear the required time period of my university commitments. I did that by moving most of my courses into the first half of the academic year. Now, the main thing left for me to do is my Master’s thesis.
I consulted my scientific advisors at both universities I formally attend (SFEDU and LUT - I am in a double degree program), and as a result they agreed to change my Master’s thesis topic to match my Outreachy work.
Now, that should sound like excellent news - merging these activities actually means I can allocate much more time to my work on reproducible builds, even beyond the actual internship period. It was intended to remove a burden from my shoulders.
Still, I feel a bit uneasy. The drawback of this decision lies in the fact that I have no idea how to write a scientific report based on purely practical work. I know other students from my universities have done such things before, but choosing my own topic means my scientific advisors can’t help me much - this is just outside their area of expertise.
Well, wish me luck - I’m up to the challenge!
I decided to take a crack at an audio representation of network traffic. The Solaris version of ping used to have an audio option, which would produce sound for successful pings. In the past I've used audio cues to monitor events like service health and build status.
It seemed like you could produce audio to give an overall feel for what was happening on the network. I was imagining a quick listen would be able to answer questions like:
- How busy is the network?
- How many sources are active?
- Is the traffic a lot of streams or just a few?
- Are there any interesting events such as packet loss or congestion collapse going on?
- What's the mix of services involved?
I divided the project into three segments, which I will write about in future entries:
- What parts of the network to model
- How to present the audio information
- Tools and implementation
I'm fairly happy with what I have. It doesn't represent all the items above. As an example, it doesn't directly track packet loss or retransmissions, nor does it directly distinguish one service from another. Still, just because of the traffic flow, rsync sounds different from http. It models enough of what I'm looking for that I find it to be a useful tool. And I learned a lot about music and Linux audio. I also got to practice designing discrete-time control functions in ways that brought back the halls of MIT.
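To give a flavour of the kind of mapping involved (this is my own illustrative sketch, not the implementation described in the follow-up entries), one might derive a stable musical pitch from a flow's identity and a note velocity from packet size, so that rsync and http streams really do sound different:

```python
import zlib

def flow_to_pitch(src, dst, sport, dport, low=36, high=84):
    """Hash a flow's identity into a stable MIDI pitch, so the same
    stream always sounds the same note (illustrative mapping only)."""
    key = f"{src}:{sport}->{dst}:{dport}".encode()
    return low + zlib.crc32(key) % (high - low)

def size_to_velocity(nbytes, mtu=1500):
    """Louder notes for bigger packets, clamped to MIDI's 1..127 range."""
    return max(1, min(127, round(127 * nbytes / mtu)))
```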
Yesterday afternoon, the ninth update in the 0.12.* series of Rcpp made it to the CRAN network for GNU R. Windows binaries have by now been generated, and the package was updated in Debian too. This 0.12.9 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, and the 0.12.8 release in November --- making it the tenth release at the steady bi-monthly release frequency.
Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 906 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by sixty-three packages over the two months since the last release - or about a package a day!
Some of the changes in this release are smaller and detail-oriented. We did squash one annoying bug (stemming from the improved exception handling) in Rcpp::stop() that hit a few people. Nathan Russell added a sample() function (similar to the optional one in RcppArmadillo); this required a minor cleanup for a small number of other packages which had both namespaces 'opened'. Date and Datetime objects now have format() methods and << output support. We now have coverage reports via covr as well. Last but not least, James "coatless" Balamuta was once more tireless on documentation and API consistency --- see below for more details.

Changes in Rcpp version 0.12.9 (2017-01-14)
Changes in Rcpp API:
The exception stack message is now correctly demangled on all compiler versions (Jim Hester in #598)
Date and Datetime object and vector now have format methods and operator<< support (#599).
Changes in Rcpp Sugar:
Changes in Rcpp unit tests:
Changes in Rcpp Documentation:
Changes in Rcpp build system:
Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.
We all love our project and want to make sure Debian still shines in the next decades (and centuries!). One way to secure that goal is to identify elements/events/things which could put that goal at risk. To this end, we've organized a short S.W.O.T analysis session at DebConf16. Minutes of the meeting can be found here. I believe it is an interesting read and is useful for Debian old-timers as well as newcomers. It helps to convey a better understanding of the project's status. For each item, we've tried to identify an action.
Here are a few things we've worked on:
- Identify new potential contributors by attending and speaking at conferences where Free and Open Source software is still not very well-known, or where we have too few contributors.
Each Debian developer is encouraged to identify events where we can promote FOSS and Debian. As DPL, I'd be happy to cover expenses to attend such events.
- Our average age is also growing over the years. It is true that we could attract more new contributors than we already do.
We can organize short internships. We should not wait for students to come to us. We can get in touch with universities and engineering schools and work together on a list of topics. It is easy and will give us the opportunity to reach out to more students.
It is true that we have tried in the past to do that. We may organize a sprint with interested people and share our experience on trying to do internships on Debian-related subjects. If you have successfully done that in the past and managed to attract new contributors that way, please share your experience with us!
If you see other ways to attract new contributors, please get in touch so that we can discuss!
- Not easy to get started in the project.
It could be argued that all the information is available, but rather than being easily findable from one starting point, it is scattered over several places (documentation on our website, wiki, metadata on bug reports, etc…).
Fedora and Mozilla both worked on this subject and did build a nice web application to make this easier and nicer. The result of this is asknot-ng.
A whatcanidofor.debian.org would be wonderful! Any takers? We can help by providing a virtual machine to build this. Being a DD is not mandatory. Everyone is welcome!
- Cloud images for Debian.
This is a very important point since cloud providers are now major consumers of distributions. We have to ensure that Debian is correctly integrated in the cloud, without making compromises on our values and philosophy.
I believe this item has been worked on during the last Debian Cloud sprint. I am looking forward to seeing the positive effects of this sprint in the long term. I believe it does help us to build a stronger relationship with cloud providers and gives us a nice opportunity to work with them on a shared set of goals!
In the meantime, everyone should feel free to pick one item from the list and work on it. :-)
TL;DR: If you use NetAddr::IP->new6() for resolving DNS names to IPv6 addresses, the addresses returned by NetAddr::IP are not what you might expect. See below for details.

Issue #2 in UIF
Over the last couple of days, I tried to figure out the cause of a weird issue observed in UIF (Universal Internet Firewall, a nice Perl tool for setting up ip(6)tables-based firewalls).
Already a long time ago, I stumbled over a weird issue of resolving DNS names to IPv6 addresses in UIF, which I reported as issue #2 against upstream UIF back then.
I happen to be co-author of UIF. So, I felt very ashamed all the time for not fixing the issue any sooner.
As many of us DDs try to get our packages into shape before the next Debian release these days, I find myself doing the same. I started investigating the underlying cause of issue #2 in UIF a couple of days ago.

Issue #119858 on CPAN
Today, I figured out that the Perl code in UIF is not causing the observed phenomenon. The same behaviour is reproducible with a minimal and pure NetAddr::IP based Perl script (reported as Debian bug #851388). Thanks to Gregor Herrmann for forwarding the Debian bug upstream (#119858).
Here is the example script that shows the flawed behaviour:
#!/usr/bin/perl

use NetAddr::IP;

my $hostname = "google-public-dns-a.google.com";

my $ip6 = NetAddr::IP->new6($hostname);
my $ip4 = NetAddr::IP->new($hostname);

print "$ip6 <- WTF???\n";
print "$ip4\n";

exit(0);
[mike@minobo ~]$ ./netaddr-ip_resolv-ipv6.pl
0:0:0:0:0:0:808:808/128 <- WTF???
8.8.8.8/32

In words...
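The ::808:808 value above is just the IPv4 address 8.8.8.8 reinterpreted as a 128-bit IPv6 integer. A quick sketch with Python's ipaddress module reproduces the same conversion:

```python
import ipaddress

# Reinterpret the 32-bit IPv4 value as a (bogus) IPv6 address,
# mirroring what NetAddr::IP->new6() effectively produces here.
v4 = ipaddress.IPv4Address("8.8.8.8")
v6 = ipaddress.IPv6Address(int(v4))  # 0x08080808 as an IPv6 integer
print(v6)  # → ::808:808
```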
So what happens in NetAddr::IP is that with the new6() "constructor" you initialize a new IPv6 address. If the address is a DNS name, NetAddr::IP internally resolves it into an IPv4 address and converts this IPv4 address into an IPv6-ish format. This bogus IPv6 address is not the one matching the given DNS name.

Impacted Software in Debian
Various Debian packages use NetAddr::IP and may be affected by this flaw; here is an incomplete list (use apt-rdepends -r libnetaddr-ip-perl for the complete list):
Any of the above packages could be affected if NetAddr::IP->new6(<dnsname>) is being used. I haven't checked any of the code bases, but the corresponding maintainers may want to do that.

References
- https://github.com/cajus/uif/
- https://github.com/cajus/uif/issues/2
- https://rt.cpan.org/Public/Bug/Display.html?id=119858
- https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=851388