Planet Debian


Norbert Preining: Python 3 deprecation imminent

13 November, 2019 - 15:23

OSS Journal, November 2026. In less than two months, at the end of the year 2026, Python 3 will be deprecated and will no longer receive security updates. Despite the deprecation having been announced back in summer 2020, shortly after the deprecation of Python 2, thousands of software projects, in particular in data science, still seem to be based on Python 3.

After the initially quick uptake of Python 4, announced in June 2020, most developers have switched to the clearly superior version, which resolves long-standing discrepancies in Python's variable semantics. Unfortunately, Python 4 is not backward compatible with Python 3 (to say nothing of Python 2).

In a recent interview with the OSS Journal, the Python Head Developer stated:

The future is with Python 4 – we have worked hard to make Python 4 the best programming language out there, and we expect it to serve the community for a long future. Having announced the deprecation of Python 3 well in advance (5 years ago), we expect everyone to have updated their code by now.

The Python developer community has enthusiastically embraced Python 4, and we see no reason to prolong the outdated Python 3 language just for a few data scientists.

The Python 3 deprecation has created a whole new branch of companies providing only Python upgrade services, but despite the abundance of these services, many programs are still available only for Python 3, some – like Calibre – even only for Python 2.

So let us use the remaining months to fix the billions of lines of code still not compatible with Python 4, for a better future! Rest assured, it will be the last incompatible Python upgrade (for now).

Dirk Eddelbuettel: RcppAnnoy 0.0.14

12 November, 2019 - 19:13

A new minor release of RcppAnnoy is now on CRAN, following the previous 0.0.13 release in September.

RcppAnnoy is the Rcpp-based R integration of the nifty Annoy library by Erik Bernhardsson. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours—originally developed to drive the famous Spotify music discovery algorithm.

This release once again allows compilation on older compilers. The 0.0.13 release in September brought very efficient 512-bit AVX instructions to accelerate computations. However, this code could not be compiled on older machines, so we caught up once more with upstream and switched to conditional code which falls back to either 128-bit AVX or no AVX at all, ensuring buildability “everywhere”.

Detailed changes follow below.

Changes in version 0.0.14 (2019-11-11)
  • RcppAnnoy again synchronized with upstream to ensure builds with older compilers without AVX512 instructions (Dirk #53).

  • The cleanup script only uses /bin/sh.

Courtesy of CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Russ Allbery: Review: Mary Poppins

11 November, 2019 - 11:04

Review: Mary Poppins, by P.L. Travers

Series: Mary Poppins #1 Illustrator: Mary Shepard Publisher: Houghton Mifflin Harcourt Copyright: 1934 Printing: 2014 ISBN: 0-544-57475-3 Format: Kindle Pages: 202

I read this book as part of Mary Poppins: 80th Anniversary Collection, which includes the first four books of the series.

I have a long-standing irritation with movies that become so famous that they swallow the book on which they were based, largely because of three examples from my childhood: Bambi (the book is so much better), The Wizard of Oz (the book is... not really better, but the rest of the series certainly is), and Mrs. Frisby and the Rats of NIMH (the book is so much more). That irritation is sometimes misplaced, however. Even Disney has been known to make a mediocre book into a better movie on occasion (The Hundred and One Dalmatians). When Mary Poppins came up recently (a movie I adored as a kid), I vaguely remembered having read the book long ago, couldn't remember anything about it, and wondered what side of the fence it would come down on. Since an anniversary collection of the first four books was free with Amazon Prime, it was easy to find out.

Answer: perhaps the series improves in later books, but the movie is totally different in tone from the first book, and much better.

I am surprised that I'd forgotten as much about this book as I had, even though it's been at least thirty years since I've read it, since it is extremely odd. I suspect the highly episodic structure is to blame. Mary Poppins never develops into a proper story; instead, it's a series of vignettes about the titular character, the Banks family, and other people on Cherry-Tree Lane. Some of these stories will be familiar from the movie (Uncle Albert floating up into the air because of his laughter). Some will definitely not be, such as a (very brief) trip around the world via a magical compass, a visit from a star who is Christmas shopping, or a truly bizarre birthday celebration for Mary Poppins in the zoo. Unlike the movie, there is no unifying theme of Mary Poppins fixing the Banks's family problems; quite to the contrary, she seems entirely uninterested in and even oblivious to them.

This is not Julie Andrews's kind, gentle, and magically competent nurse. There aren't two separate advertisements for her job; this Mary Poppins appears after Mrs. Banks sent letters to the papers advertising for a position and blithely dismisses her request for references. She is neither kind nor gentle, although by the end of the book one gets the feeling she's brought a sort of gruff stability to the household. Like the movie character, she does take the children on adventures, but they seem almost accidental, a side effect of being around Mary Poppins and thus inadvertently involved in her business (which she rarely, if ever, explains). It's a more intimidating and fae strangeness, only slightly explained by a chapter that reveals that all children know how to talk to the sun and the wind and the animals but forget when they turn one, except for Mary Poppins. (Ode: Intimations of Immortality has a lot of depressing philosophy to answer for.)

Perhaps the oddest difference from the movie for me is that Travers's Mary Poppins is endlessly vain. She's constantly admiring herself in shop windows or finding just the right clothes, much to the frequent boredom of the children. It's an amusing take on a child's view of adult shopping trips, but the vanity and preening feels weirdly out of place for such a magical character.

There is no change in the Banks household in this book; perhaps there is more material in the later books. (The whole series appears to be eight volumes.) When the wind changes, Mary Poppins disappears as mysteriously as she appears, not even saying goodbye, although she does leave some gifts. By that point, Jane and Michael do seem fond of her, although I'm not entirely sure why. Yes, there are adventures, but outside of them, and even during them, Mary Poppins is short, abrupt, demanding, and fond of sharp and dismissive aphorisms. Gregory Maguire proclaims his preference for the books in the foreword on the grounds that they show more glimmers of mystery and danger, and I can see that if I squint. But I mostly found her unpleasant, dictatorial, irritating, and utterly unwilling to explain anything to curious children.

On this point, I'll dare to disagree with Maguire and prefer the Disney version.

A few of the stories here were fun and entertaining in ways not provided by the movie, particularly "Miss Lark's Andrew" (the successful escape of a neighbor dog from enforced isolation and unnatural pampering) and "Christmas Shopping" (I do hope the Pleiades liked their presents!). But when I get the urge for Mary Poppins, I think I'll turn to the movie with no regrets. This is an interesting curiosity, and perhaps subsequent books add more depth (and make Mary less obnoxious), but I don't think it's worth seeking out.

Followed by Mary Poppins Comes Back.

Rating: 6 out of 10

Keith Packard: picolibc-hello-world

11 November, 2019 - 05:54
Picolibc Hello World Example

It's hard to get started building applications for embedded RISC-V and ARM systems. You need to at least:

  1. Find and install the toolchain

  2. Install a C library

  3. Configure the compiler for the right processor

  4. Configure the compiler to select the right headers and libraries

  5. Figure out the memory map for the target device

  6. Configure the linker to place objects in the right addresses

I've added a simple 'hello-world' example to picolibc that shows how to build something that runs under qemu so that people can test the toolchain and C library and see what values will be needed from their hardware design.

The Source Code

Getting text output from the application is a huge step in embedded system development. This example uses the “semihosting” support built into picolibc to simplify that process. It also explicitly calls exit so that qemu will stop when the demo has finished.

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    printf("hello, world\n");
    exit(0);
}
The Command Line

The hello-world documentation takes the user through the steps of building the compiler command line, first using the picolibc.specs file to specify header and library paths:

gcc --specs=picolibc.specs

Next adding the semihosting library with the --semihost option (this is an option defined in picolibc.specs which places -lsemihost after -lc):

gcc --specs=picolibc.specs --semihost

Now we specify the target processor (switching to the target compiler here as these options are target-specific):

riscv64-unknown-elf-gcc --specs=picolibc.specs --semihost -march=rv32imac -mabi=ilp32

or

arm-none-eabi-gcc --specs=picolibc.specs --semihost -mcpu=cortex-m3

The next step specifies the memory layout for our emulated hardware, either the 'spike' emulation for RISC-V:

riscv64-unknown-elf-gcc --specs=picolibc.specs --semihost -march=rv32imac -mabi=ilp32 -Thello-world-riscv.ld

with hello-world-riscv.ld containing:

__flash = 0x80000000;
__flash_size = 0x00080000;
__ram = 0x80080000;
__ram_size = 0x40000;
__stack_size = 1k;
INCLUDE picolibc.ld

or the mps2-an385 for ARM:

arm-none-eabi-gcc --specs=picolibc.specs --semihost -mcpu=cortex-m3 -Thello-world-arm.ld

with hello-world-arm.ld containing:

__flash =      0x00000000;
__flash_size = 0x00004000;
__ram =        0x20000000;
__ram_size   = 0x00010000;
__stack_size = 1k;
INCLUDE picolibc.ld

Finally, we add the source file name and target elf output:

riscv64-unknown-elf-gcc --specs=picolibc.specs --semihost -march=rv32imac -mabi=ilp32 -Thello-world-riscv.ld -o hello-world-riscv.elf hello-world.c

arm-none-eabi-gcc --specs=picolibc.specs --semihost -mcpu=cortex-m3 -Thello-world-arm.ld -o hello-world-arm.elf hello-world.c
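
Once built, the example can be run under qemu. As a rough sketch of my own (not from the original post; the machine name and flags are assumptions that may vary by qemu version), the ARM binary can be run on the same mps2-an385 machine targeted above, with semihosting routing the printf output to the terminal:

# hypothetical invocation; adjust the machine and flags for your qemu version
qemu-system-arm -M mps2-an385 -semihosting -nographic -kernel hello-world-arm.elf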
Summary

Picolibc tries to make things a bit simpler by offering built-in compiler and linker scripts along with default startup code, making it easier to build your first embedded application.

Markus Koschany: My Free Software Activities in October 2019

11 November, 2019 - 05:17

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games
  • Python 3 ports: I reviewed and sponsored krank and solarwolf for Reiner Herrmann. Thanks to his diligent work, two more Python games were ported to Python 3. He also packaged a new upstream release of hyperrogue and improved the build system. Less memory is required to build hyperrogue now, and some buildds are thankful for that.
  • The bullet transition got finally approved and completed successfully.
  • I uploaded a new version of pygame-sdl2 to experimental which supports Python 3 now. However, the library is still needed exclusively for renpy, whose upstream hasn't finished the porting work to Python 3 yet. Hopefully this will be done next year. That means the new version of renpy, which I also packaged this month, still depends on Python 2.
  • I fixed two bugs in Freeciv, the famous strategy game. The first by replacing fonts-noto-cjk with fonts-unfonts-core (#934588); the latter fonts apparently look better on ordinary screens. The second was simple to fix: I just had to remove an unneeded Python 2 build-dependency. (#936553)
  • The strategy game asc, a neat clone of Battle Isle 2, also needed some attention this month. I had to replace libwxgtk3.0-dev with libwxgtk3.0-gtk3-dev. (#943439)
  • I did a QA upload of open-invaders because the maintainer email address was bouncing. The game needs a new maintainer.
Debian Java Misc
  • I packaged a new version of privacybadger, and backported ublock-origin to Stretch and Buster because the addon was incompatible with the latest Firefox ESR release.
Debian LTS

This was my 44th month as a paid contributor and I have been paid to work 22.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 14.10.2019 until 20.10.2019 and from 28.10.2019 until 03.11.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in wordpress, ncurses, opencv, pillow, poppler, golang, gdal, lz4, python-reportlab, ruby-haml, vips, rdesktop, modsecurity-crs, zabbix, polarssl and tika.
  • DLA-1960-1. Issued a security update for wordpress fixing 7 CVEs.
  • DLA-1966-1. Issued a security update for aspell fixing 1 CVE.
  • DLA-1973-1. Issued a security update for libxslt fixing 1 CVE.
  • DLA-1978-1. Issued a security update for python-ecdsa fixing 2 CVEs.
  • DLA-1982-1. Issued a security update for openafs fixing 2 CVEs.
  • I triaged 17 CVEs in libgig and forwarded the result upstream. After the investigation I decided to mark these issues as no-dsa because, all in all, the security risk was low. (#931309)
ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my seventeenth month and I have been assigned to work 15 hours on ELTS plus five hours from September. I used 8 of them for the following:

  • ELA-185-1. Issued a security update for libxslt fixing 1 CVE.
  • ELA-186-1. Issued a security update for libssh2 fixing 1 CVE.
  • ELA-187-1. Issued a security update for cpio fixing 1 CVE. The update was prepared by Ola Lundqvist.
  • ELA-188-1. Issued a security update for djvulibre fixing 1 CVE.
  • I worked on OpenJDK 7. I contacted upstream and asked for a new IcedTea release on which we rely for packaging new upstream releases of OpenJDK. The release is still delayed.

Russ Allbery: Review: Happy Ever After

10 November, 2019 - 11:58

Review: Happy Ever After, by Paul Dolan

Publisher: Penguin Copyright: 2019 ISBN: 0-241-28445-7 Format: Kindle Pages: 186

Paul Dolan is a professor of behavioral science at the London School of Economics, but grew up a working-class kid in a council estate (UK public housing; the US equivalent is the projects) in London. This intentionally provocative book looks at a list of nine things that we have been taught to believe will make us happy and presents evidence against that assumption. Dolan's goal is to question social narratives of success and to advocate for a different philosophical basis for life decisions: minimizing misery (negative utilitarianism), rather than trying to maximize happiness.

Happy Ever After is an argument against rules and, specifically, against judging people by rules rather than outcomes:

There is nothing inherently good or bad in a social narrative in itself; it can only ever be judged according to the costs and benefits of adhering to it in a given context. I therefore adopt a consequentialist position in contrast to a deontological one. A consequentialist's view on theft would be that it is only ever wrong when it causes more misery than it promotes happiness, whereas a deontologist would be duty bound to argue that theft is always wrong because moral value lies in certain rules of conduct. A deontological perspective typically does not allow for the importance of context. And yet I would contend that it is morally right to steal to feed your hungry child.

This is obviously a drastically simplified explanation of a complex philosophical debate, but those of you who know my political beliefs probably see why I picked up this book.

Before I dive into the details, though, one note about accuracy. One of Dolan's most provocative claims is that marriage does not, on average, make women happy, a claim repeated in an article in The Guardian (now amended to remove the claim). This claim is not supported by the data he references in this book. It was based on a misunderstanding of the coding of results in the American Time Use Survey and has been subsequently retracted by Dolan.

This is a good caution to have in the back of your mind. Dolan, as is typical for a book of this sort, cites a lot of surveys and statistics. At least some of those citations are wrong. Many more are probably unreproducible. This is not a problem unique to Dolan; as the Vox article points out, most books are fact-checked only by the author, and even academic papers have fallen prey to the replication crisis. Hard factual data, particularly about psychology, is hard to come by.

How fatal this is for Dolan's book is a judgment for the reader. Personally, I'm dubious of most psychological studies and read books like this primarily for opportunities of insight into my own life and my own decision-making processes. Whether or not statistics say that marriage makes women happier on average, Dolan's actual point stands: there is no reason to believe that marriage will necessarily make any specific woman happier, and thus pursuit of marriage as a universal life goal is on dubious ground. The key contention of Happy Ever After, in my reading of it, is that we measure ourselves and others against universal social narratives and mete out punishment for falling short, even if there is no reason to believe that social narrative should be universal. That in turn is a cause of unnecessary misery in the world that we could avoid.

Dolan divides his material into three meta-narratives, each with three sub-narratives: reaching (composed of wealthy, successful, and educated), related (married, monogamous, and children), and responsible (altruistic, healthy, and volitional). For each, he provides some data questioning whether following that narrative truly makes us happy, and looks at ways where the narrative itself may be making us unhappy. Each chapter starts with a simple quiz that asks the reader to choose between a life (first for oneself and then for one's friend) that fulfills that narrative but makes them feel miserable frequently and a life that does not fulfill that narrative but in which they rarely feel miserable. At the end of each section, Dolan shows the results of that survey, all of which show at least some support (surprising to me) for choosing the narrative despite the cost of being miserable.

Some of these chapters I found unsurprising. I'm unmarried and don't intend to have children, so the chapters on marriage and children struck me as relatively obvious. Similarly, the lack of positive happiness benefit of wealth beyond a rather modest level is well-known, although I thought Dolan failed to engage sufficiently with the risk of misery from being poor. A significant motivation for pursuing modest wealth for many people is to acquire a form of self-insurance against financial disasters, particularly in the US with our appalling lack of a safety net.

I had the most mental arguments with Dolan over education. Apparently (and not very surprisingly) this is the social narrative that I buy into the most strongly. But Dolan makes good points about how pushing a working-class kid to go to a middle-class or upper-class university can sever them from their friendship ties and emotional support network and force a really miserable adjustment, and it's not clear that the concrete benefits of education in their life are worth that. This would be even clearer if we hadn't started using college degree attainment as a credentialing system for many jobs that are not reliant on specialized education only attainable in college. (I'm looking at nearly the entire field of computing, for example.) Dolan goes farther than I would in arguing that no college education should be state-subsidized because it's inherently unfair for working-class people to be taxed to pay for middle-class educational structures. Still, I keep thinking back to this chapter during US political discussions about how important it is that we create some economic path for every US child to attend college. Is that really the correct public education policy? (See also Hofstadter's point in Anti-Intellectualism in American Life that some of the US obsession with college education is because, by comparison to Germany, our high-school and middle-school education is slow, relaxed, unchallenging, and insufficient.)

Altruistic and volitional may require a bit of additional explanation. Dolan's point with altruism is that we value a social narrative of giving for its own sake, without ego or reward. (I personally would trace this to Christianity; this was the interpretation of Matthew 6:6 that I was taught.) He argues that letting people show off their good deeds encourages more good deeds, and that helping others increases personal happiness. People who are more self-oriented in their motivations for volunteering stick with volunteer projects for longer. I thought of free software here, where self-interested reasons for volunteering are commonplace and accepted (scratching your own itch) rather than socially shunned, and considered part of the healthy texture of the community.

The chapter on volition recaps some of the evidence (which I've also seen in other books) that less of our life and our decisions stem from individual choice than we would like to think, and that some of our perception of free will is probably a cognitive illusion. Dolan isn't too interested in trying to undermine the reader's own sense of free will, but does want to undermine our belief in the free will of other people. His target here is the abiding political belief that other people get the life outcomes they deserve, and that poor people are poor because they're lazy or make bad choices. If we let go of the social narrative of volition and instead judge interventions solely by their results, we have fewer excuses to not collectively tackle problems and fewer justifications for negatively judging other people for their own misery.

I'm not sure I recommend this whole book. It's delightfully contrarian, but somewhat slim on new ideas (particularly if you've read broadly about happiness and life satisfaction) and heavy on studies that you should be somewhat dubious about. I'm still thinking about the chapter on education, though. How much you get out of it may depend on how many of Dolan's narratives you agree with going into the book.

Also, although I didn't discuss it in detail, mad props to Dolan for taking on the assumption that striving to be healthy is a life goal that should override happiness. We need a lot more questioning of that specific narrative.

Rating: 7 out of 10

Hideki Yamane: fontforge package update

10 November, 2019 - 09:50
I've uploaded the fontforge package to experimental. It needed huge changes to the Debian packaging.

> $ git diff debian/1%20170731_dfsg-2 HEAD debian|wc -l
> 2565

It'll help with python2 removal since it provides python3-fontforge for building font packages.

Other work: updated several packages to their latest upstream releases, fixed Multi-Arch issues and lintian warnings, added salsa-ci.yml and enabled CI on salsa, etc.

Next: eliminate the "repo in alioth" warning, enable salsa-ci for more packages, then dig into bug reports.

Rodrigo Siqueira: Status Update and XDC 2019, October 2019

10 November, 2019 - 09:00

It has been a while since my last post, but there is a simple reason for that: on August 5th, I had to move from Brazil to Canada. Why did I move? Thanks to Harry Wentland's recommendation, I got an interview for a software engineering position at AMD (Markham), and I got hired to work on the display team. From now on, I suppose that I'll be around the DRM subsystem for a long time :). Even though I'm now employed by AMD, this post reflects my personal thoughts only and should not be construed to represent AMD in any way.

I have only a few updates about my work with the community, since I have been busy with my relocation and adaptation. My main updates come from XDC 2019 [1], and I want to share them here.

XDC 2019 - Montréal (Concordia University Conference)

This year I had the great luck of joining XDC again, this time together with Harry Wentland, Nicholas Kazlauskas, and Leo Li (we work together at AMD). We put effort into learning from other people's experiences and tried to find out what compositor developers wanted to see in our driver. We also used this opportunity to explain a little bit more about our hardware features. In particular, we had conversations about Freesync, MST, DKMS, and so forth. With that in mind, I'll share my view of the most exciting moments we had.

VKMS

As usual, I tried my best to understand what people want to see in VKMS sooner or later. For example, after XDC 2018 I focused on fixing some bugs, but mainly on adding writeback support because it provides visual output (this work is almost done, see [2]). This year I collected feedback from multiple people (special thanks to Marten, Lyude, Hiler, and Harry), and from these conversations I plan to work on the following tasks:

  1. Finish the writeback feature and enable visual output;
  2. Add support for adaptive refresh rate;
  3. Add support for “dynamic connectors”, which can enable the MST test.

Additionally, Martin Peres gave a talk in which he shared his views on CI and testing. In his presentation, he suggested using VKMS to validate the API, and I have to admit that I'm really excited about this idea. I hope that I can help with it.

Freesync

The amdgpu driver supports a technology named Freesync [3]. In a few words, this feature allows dynamic changes to the refresh rate, which can bring benefits for games and for power saving. Harry Wentland talked about that feature, and you can watch it here:

Video 1: Freesync, Adaptive Sync & VRR

After Harry's presentation, many people asked interesting questions related to this subject. This caught my attention, and for this reason I added VRR to my VKMS roadmap. Roman Gilg, from KDE, was one of the developers who asked for a Wayland protocol extension to support Freesync; additionally, compositor developers asked for mechanisms that let them know in advance whether the experience on a specific panel will be good or not. Finally, there were some discussions about the use of Freesync for power saving and in time-sensitive applications.

IGT and CI

This year I kept my tradition of asking Hiler thousands of questions with the goal of learning more about IGT, and as usual he was extremely kind and patient (thanks, Hiler). One of the concepts Hiler explained to me is the use of podman (https://podman.io/) to prepare an IGT image; for example, after a few minutes of pairing with him I could run IGT on my machine by executing the following commands:

sudo su
podman run --privileged registry.freedesktop.org/drm/igt-gpu-tools/igt:master
podman run --privileged registry.freedesktop.org/drm/igt-gpu-tools/igt:master \
                        igt_runner -t core_auth
podman run --privileged registry.freedesktop.org/drm/igt-gpu-tools/igt:master \
                        igt_runner -t core_auth /tmp
podman run --privileged -v /tmp/results:/results \
  registry.freedesktop.org/drm/igt-gpu-tools/igt:master igt_runner -t core_auth /results

We also had a chance to discuss CI with Martin Peres, and he explained his work on improving the way CI keeps track of bugs. In particular, he introduced a fantastic tool named cibuglog, which keeps track of test failures and builds a database from this data. Cibuglog has many helpful filters that let us see test problems associated with a specific machine, as well as bugs in Bugzilla. The big insight behind cibuglog is the idea of using data to help with bug tracking. Thanks, Martin, for showing us this amazing tool.

Updates

I just want to finish this post with brief updates from my work with free software, starting with kw and finishing with VKMS.

Kernel Workflow (kw)

When I started to work on VKMS, I wrote a tool named kworkflow, or simply kw, to help me with basic tasks related to kernel development. Recently kw was reborn for me when I was looking for a way to automate my work with amdgpu; as a result, I implemented the following features:

  • Kernel deploy in a target machine (any machine reachable via IP);
  • Module deploy;
  • Capture .config file from a target machine;

Unfortunately, the code is not ready for merging into the main branch yet; I'm working on it, and I think I can release a new version with these features in a couple of weeks. If you want to know a little bit more about kw, take a look at https://siqueira.tech/doc/kw/

VKMS

I had not been working on VKMS due to my change of country; however, I am now reworking part of the IGT test related to writeback, and as soon as I finish it, I will try to upstream it again. I hope to have the VKMS writeback support merged into drm-misc-next by the end of this month. Finally, I merged the prime support implemented by Oleg Vasilev (huge thanks!).

References

[1] “XDC 2019”. URL: https://xdc2019.x.org

[2] “Introduces writeback support”. URL: https://patchwork.freedesktop.org/series/61738/

[3] “FreeSync”. URL: https://en.wikipedia.org/wiki/FreeSync

Russ Allbery: Python dataclasses and typing

10 November, 2019 - 06:02

I'm going to preach the wonders of Python dataclasses, but for reasons of interest to those who have already gone down the delightful rabbit-hole of typed Python. So let me start with a quick plug for mypy if you haven't heard of it.

(Warning: this is going to be a bit long.)

Type Checking

mypy is a static type-checker for Python. In its simplest form, instead of writing:

def hello(name):
    return f"Hello, {name}"

you write:

def hello(name: str) -> str:
    return f"Hello, {name}"

The type annotations are ignored at runtime, but the mypy command makes use of them to do static type checking. So, for instance:

$ cat > t.py
def hello(name: str) -> str:
    return f"Hello {name}"
hello(1)
$ mypy t.py
t.py:3: error: Argument 1 to "hello" has incompatible type "int"; expected "str"

If you're not already using this with your Python code, I cannot recommend it highly enough. It's somewhat tedious to add type annotations to existing code, particularly at first when you have to look up how mypy represents some of the more complicated constructs like decorators, but once you do, mypy starts finding bugs in your code like magic. And it's designed to work incrementally and tolerate untyped code, so you can start slow, and the more annotations you add, the more bugs it finds. mypy is much faster than a comprehensive test suite, so even if you would have found the bug in testing, you can iterate faster on changes. It can even be told which variables may be None and then warn you if you use them without checking for None in a context where None isn't allowed.
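
As an illustration of that last point, here is a small sketch of my own (not from the post, and mypy's exact wording may differ): mypy flags any use of a possibly-None value until a check narrows it.

from typing import Optional

def greet(name: Optional[str]) -> str:
    # mypy flags this: name may be None, and None has no upper()
    return f"Hello, {name.upper()}"

def greet_safely(name: Optional[str]) -> str:
    if name is None:
        return "Hello, stranger"
    # mypy narrows name to str after the check, so this passes
    return f"Hello, {name.upper()}"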

But mypy can only help with code that's typed, so once you get the religion, the next goal is to push typing ever deeper into complicated corners of your code.

Typed Data Structures

Python code often defaults to throwing any random collection of data into a dict. For a simple example, suppose you have a paginated list of strings (a list, an offset from the start of the list, a limit of the number of strings you want to see, and a total number of strings in the underlying data). In a lot of Python code, you'll see something like:

strings = {
    "data": ["foo", "bar", "baz"],
    "offset": 5,
    "limit": 3,
    "total": 10,
}

mypy is good, but it's not magical. It has no way to keep track of the fact that strings["data"] is a list of strings, but strings["offset"] is an int. Instead, it decides the type of each value is the superclass of the types it sees in the initializer (in this case, object, which provides almost no type checking).
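
You can check the inference with mypy's reveal_type() helper, applied to the strings dict above (my illustration, not the post's; reveal_type() exists only for mypy and must not be left in running code):

reveal_type(strings)   # mypy notes: Revealed type is "dict[str, object]"
strings["offset"] + 5  # mypy rejects this: "object" does not support +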

There are two traditional solutions: an object, and a NamedTuple (the typing-enhanced version of collections.namedtuple). An object is tedious:

class Strings:
    def __init__(
        self, data: List[str], offset: int, limit: int, total: int
    ) -> None:
        self.data = data
        self.offset = offset
        self.limit = limit
        self.total = total

This provides perfect typing, but who wants to write all of that? A NamedTuple is a little better:

Strings = NamedTuple(
    "Strings",
    [("data", List[str]), ("offset", int), ("limit", int), ("total", int)],
)

but still kind of tedious and has other quirks, such as the fact that your object can now be used as a tuple, which can introduce some surprising bugs.

Enter dataclasses, which are new in Python 3.7 (although inspired by attrs, which have been around for some time). The equivalent is:

@dataclass
class Strings:
    data: List[str]
    offset: int
    limit: int
    total: int

So much nicer, and the same correct typing. And unlike NamedTuple, dataclasses support default values, expansion via inheritance, and are full classes so you can attach short methods and do other neat tricks. You can also optionally mark them as frozen, which provides the NamedTuple behavior of making them immutable after creation.
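
For instance, here is a minimal sketch of my own (the class names are illustrative, not from the post) combining defaults, inheritance, and frozen:

from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Page:
    offset: int = 0
    limit: int = 20

@dataclass(frozen=True)
class StringPage(Page):
    # mutable defaults must go through default_factory
    data: List[str] = field(default_factory=list)

page = StringPage(offset=5, limit=3, data=["foo", "bar", "baz"])
# page.offset = 10 would raise dataclasses.FrozenInstanceError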

A Detailed Example

Using dataclasses for those random accumulations of data is already great, but today I found a way to use them for a trickier typing problem.

I work on a medium-sized (about 75 routes) Tornado web UI using Jinja2 and WTForms for templating. Returning a page to the user's browser involves lots of code that looks something like this:

self.render("template.html", service=service, owner=owner)

Under the hood, this loads that template, builds a dictionary of template variables, and tells Jinja2 to render the template with those variables. The problem is the typing: the render method has no idea what sort of data you want to pass to a given template, so it uses the dreaded **kwargs: Any, and you can pass anything you want. And mypy can't look inside Jinja2 template code.

Forget to pass in owner? Exception or silent failure during template rendering depending on your Jinja2 options. Pass in the name of a service when the template was expecting a rich object? Exception or silent failure. Typo in the name of the parameter? Exception or silent failure. Better hope your test suite is thorough.

What I did today was wrap each template in a dataclass:

@dataclass
class Template(BaseTemplate):
    service: str
    owner: str
    template: InitVar[str] = "template.html"

Now, the code to render it looks like:

template = Template(service, owner)
self.finish(template.render(self))

and now I have type-checking of all of the template arguments and only need to ensure the dataclass definition matches the needs of the template implementation.

The magic happens in the base template class:

@dataclass
class BaseTemplate:
    def render(self, handler: RequestHandler) -> str:
        template_name = getattr(self, "template")
        template = handler.environment.get_template(template_name)
        namespace = handler.get_template_namespace()
        for field in fields(self):
            namespace[field.name] = getattr(self, field.name)
        return template.render(namespace)

(My actual code is a bit different and more complicated since I move some other template setup to the render() method.)

There's some magic here to work around dataclass limitations that warrants some explanation.

I pass the Tornado handler class into the template render() method so that I have access to the template environment and the (overridden) Tornado get_template_namespace() call to get default variable settings. Passing them into the dataclass constructor would make the code less clean and is harder to implement, mostly due to limitations on attributes with default values, mentioned below.

The name of the template file should be a property of the template definition rather than something the caller needs to know, but that means it has to be given last since dataclasses require that all attributes without default values come before ones with default values. That in turn also means that the template attribute cannot be defined in BaseTemplate, even without a default value, because if a child class sets a default value, @dataclass then objects. Hence the getattr to hide from mypy the fact that I'm breaking type rules and assuming all child classes are well-behaved.

template in the child classes is marked as InitVar so that it won't be included in the fields of the dataclass and thus won't be passed down to Jinja2.

Finally, it would be nice to be able to use dataclasses.asdict() to turn the object into a dictionary for passing into Jinja2, but unfortunately asdict tries to do a deep copy of all template attributes, which causes all sorts of problems. I want to pass functions and WTForms form objects into Jinja2, which resulted in asdict throwing all sorts of obscure exceptions. Hence the two lines that walk through the fields and add a shallow copy of each field to the template namespace.
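
As a tiny sketch of my own (illustrative names, not the post's actual code): dataclasses.asdict() deep-copies every field value, while the fields() loop above only copies references.

from dataclasses import dataclass, fields

@dataclass
class Namespace:
    title: str
    helper: object  # stands in for a function or a WTForms form

ns = Namespace("hello", print)
shallow = {f.name: getattr(ns, f.name) for f in fields(ns)}
assert shallow["helper"] is ns.helper  # same object, nothing copied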

I've only converted four templates so far (this code base is littered with half-finished transitions to better ways of doing things that I try to make forward progress on when I can), but I'm already so much happier. All sorts of obscure template problems will now be caught by mypy even before needing to run the test suite.

Dirk Eddelbuettel: Rcpp 1.0.3: More Spit and Polish

9 November, 2019 - 21:06

The third maintenance release 1.0.3 of Rcpp, following up on the 10th anniversary and the 1.0.0 release both pretty much exactly one year ago, arrived on CRAN yesterday. This deserves a special shoutout to Uwe Ligges, who was even more proactive and helpful than usual. Rcpp is a somewhat complex package with many reverse dependencies: the initial check tickles one (grandfathered) NOTE, and the reverse dependency checks typically invoke a few false positives too. In both cases he moved the process along before I even got around to replying to the auto-generated emails. So just a few hours passed between my upload and the “Thanks, on its way to CRAN” email—truly excellent work by the CRAN team. Windows and macOS binaries are presumably being built now. The corresponding Debian package was also uploaded as a source package, and binaries have since been built.

Just like for Rcpp 1.0.1 and Rcpp 1.0.2, we have a four-month gap between releases, which seems appropriate given both the changes still being made (see below) and the relative stability of Rcpp. It still takes work to release, as we run multiple extensive sets of reverse dependency checks, so maybe one day we will switch to a six-month cycle. For now, four months seems like a good pace.

Rcpp has become the most popular way of enhancing R with C or C++ code. As of today, 1832 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with 190 in BioConductor. And per the (partial) logs of CRAN downloads, we are currently running at 1.1 million downloads per month.

This release features a number of different pull requests by five different contributors as detailed below.

Changes in Rcpp version 1.0.3 (2019-11-08)
  • Changes in Rcpp API:

    • Compilation can be sped up by skipping Modules headers via a toggle RCPP_NO_MODULES (Kevin in #995 for #993).

    • Compilation can be sped up by toggling RCPP_NO_RTTI which implies RCPP_NO_MODULES (Dirk in #998 fixing #997).

    • XPtr tags are now preserved in as<> (Stephen Wade in #1003 fixing #986, plus Dirk in #1012).

    • A few more temporary allocations are now protected from garbage collection (Romain Francois in #1010, and Dirk in #1011).

  • Changes in Rcpp Modules:

    • Improved initialization via explicit Rcpp:: prefix (Riccardo Porreca in #980).
  • Changes in Rcpp Deployment:

    • A unit test for Rcpp Class exposure was updated to not fail under r-devel (Dirk in #1008 fixing #1006).
  • Changes in Rcpp Documentation:

    • The Rcpp-modules vignette received a major review and edit (Riccardo Porreca in #982).

    • Minor whitespace alignments and edits were made in three vignettes following the new pinp release (Dirk).

    • New badges for DOI and CRAN and BioConductor reverse dependencies have been added to README.md (Dirk).

    • Vignettes are now included pre-made (Dirk in #1005 addressing #1004).

    • The Rcpp FAQ has two new entries on 'no modules / no rtti' and exceptions across shared libraries (Dirk in #1009).

Thanks to CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues); questions are also welcome under the rcpp tag at StackOverflow, which also allows searching among the (currently) 2255 previous questions.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Utkarsh Gupta: Debian Activities for October 2019

9 November, 2019 - 07:02

Here’s my (first) monthly update about the activities I’ve done in Debian this October.

Debian LTS

This was my first month as a Debian LTS paid contributor.
I was assigned 10 hours and worked on the following things:

CVE Fixes and Announcements:
  • Issued DLA 1948-1, fixing CVE-2019-13574, for ruby-mini-magick.
    Details here:

    In lib/mini_magick/image.rb in ruby-mini-magick, a fetched remote image filename could cause remote command execution because Image.open input is directly passed to Kernel#open, which accepts a pipe character followed by a command.

    For Debian 8 “Jessie”, this has been fixed in 3.8.1-1+deb8u1.

  • Issued DLA 1961-1, fixing CVE-2019-14464, CVE-2019-14496, and CVE-2019-14497, for milkytracker.
    Details here:

    XMFile::read in XMFile.cpp in milkyplay in MilkyTracker had a heap-based buffer overflow.
    LoaderXM::load in LoaderXM.cpp in milkyplay in MilkyTracker had a stack-based buffer overflow.
    ModuleEditor::convertInstrument in tracker/ModuleEditor.cpp in MilkyTracker had a heap-based buffer overflow.

    For Debian 8 “Jessie”, this has been fixed in 0.90.85+dfsg-2.2+deb8u1.
    Furthermore, sent a patch to the maintainer, James, for fixing the same in Bullseye, Sid. Commit here. Fixed in 1.02.00+dfsg-2.

  • Issued DLA 1962-1, fixing CVE-2017-18638, for graphite-web.
    Details here:

    The “send_email” function in graphite-web/webapp/graphite/composer/views.py in Graphite is vulnerable to SSRF. The vulnerable SSRF endpoint can be used by an attacker to have the Graphite web server request any resource. The response to this SSRF request is encoded into an image file and then sent to an e-mail address that can be supplied by the attacker. Thus, an attacker can exfiltrate any information.

    For Debian 8 “Jessie”, this has been fixed in 0.9.12+debian-6+deb8u1.
    Furthermore, sent a patch to the maintainer, Zigo, for fixing the same in Bullseye, Sid. Commit here. Fixed in 1.1.4-5.
    Also, sent a patch to the Security Team for fixing the same in Buster, but uploaded by Zigo himself. Commit here. Fixed in 1.1.4-3+deb10u1.

Miscellaneous:
  • Actually fixed CVE-2019-11027 upstream for ruby-openid. Pull request here.
    Whilst this has been merged and released as v2.9.2, there are other login problems, as reported here.

  • Discussed best practices for the actual fix of CVE-2019-11027 in ruby-openid with the LTS team members. Thread here.

  • Triaged Ansible for CVE-2019-14846 (which seems to be an easy fix) and CVE-2019-14858 (which looks unaffected for Jessie, but I am not sure yet).

Debian Uploads New Upstream Version:
  • ruby-fog-aws ~ 3.5.2-1.
  • librole-tiny-perl ~ 2.001001-1.
  • gitlab ~ 12.1.14-1 (to experimental).
  • libmail-box-perl ~ 3.008-1.
  • ruby-invisible-captcha ~ 0.12.2-1.
  • ruby-gnome ~ 3.4.0-1.
  • gitlab-shell ~ 9.3.0+dfsg-1 (to experimental).
  • gitlab-workhorse ~ 8.8.1+debian-1 (to experimental).
  • gitaly ~ 1.59.3+dfsg-1.
  • python-marshmallow-sqlalchemy ~ 0.19.0-1.
  • gitlab ~ 12.2.9-1 (to experimental).
Bug Fixes:
  • #942125 for ruby-invisible-captcha.
  • #941795 for ruby-gnome.
  • #940352 for golang-github-davecgh-go-spew.
  • #942456 for python-flask-marshmallow.
  • Autopkgtest failure for python-flask-marshmallow.
  • CVE-2019-{18446 to 18463} for gitlab.
Reviews and Sponsored Uploads:
  • node-d3-geo ~ 1.11.6-1 for Abhijith Sheheer.
  • d3-format ~ 1:1.4.1-2 for Samyak Jain.
  • Reviewed node-regex-cache for Abhijith Sheheer.
  • Reviewed node-ansi-align for Abhijith Sheheer.
  • Reviewed node-color-name for Priyanka Saggu.
  • Reviewed node-webpack for Priyanka Saggu.
Fasttrack Repo (fasttrack.debian.net):
  • Uploaded and ACCEPTED gitlab.
  • Uploaded and ACCEPTED ruby-jwt for Nilesh Patra.
  • Uploaded and ACCEPTED ruby-gitlab-sidekiq-fetcher.
  • Uploaded and ACCEPTED ruby-fog-aws for Samyak Jain.

Until next time.
:wq for today.

Shirish Agarwal: August Landmesser, a photograph, Twitter and a move to Mastodon

8 November, 2019 - 05:53

After spending the last 2-3 nights writing that blog post, I was thinking I would take it nice and easy for a while. But unfortunately, events have a habit of overtaking the best of plans. Hopefully, this one won't be too long. It all started innocently enough. Dr. Sanjay Hegde, a senior advocate who practises in the Supreme Court, had the following cover image on his Twitter account/handle.

August Landmesser – source – Twitter, Wikipedia, theprint.in

Now, the photograph is of a gentleman called August Landmesser, a German national who, according to Wikipedia, was imprisoned, eventually drafted into penal military service, and killed in action. This erupted into a row on Twitter, as Dr. Hegde, while known for his anti-establishment views, has been in all respects a gentleman on Twitter. His Twitter account was suspended on grounds of ‘hateful imagery’. While one could argue that this was done correctly, he was not the only one: over the last several days, a lot of sane voices on the left side of the spectrum have been suspended, while no action has been taken against some accounts on the right even after they issued rape or death threats on Twitter.

So two things happened. After Advocate Hegde was reinstated over the hue and cry, he put the cover back up and was suspended again; he has now served a notice to Twitter Inc., with the senior counsel being represented by Mr. Panjal Kishore. While I don't want to get into the legal notice itself, I would say it makes for some pretty interesting reading and makes some very valid points. The English translation of a poem by the poet Gorakh Pandey provides the icing on the cake. The counsel representing Dr. Hegde also points to constitutional law and previous judgements, and references Alexander Meiklejohn and some of the statements he made in his work ‘Political Freedom’. The notice also invokes Article 19 (1) (a), which guarantees each person the right to free speech while restraining the Govt. It goes on to talk about censorship and its practice, and asks the courts to direct Twitter Inc. to unblock him, while at the same time issuing guidelines that follow, both in spirit and in form, what Article 19 is all about.

While one could argue about the merits of the case either way, it is, at least to my knowledge, one of the very rare detailed notices against a social media company that discusses civil liberties in such a manner. I unreservedly support the notice, because it is about democracy and about ensuring voices are heard, even those with whom you might disagree. How seriously the Supreme Court takes this, and what action or inaction the Govt. takes, will tell where the Government is heading. Some people feel the judiciary is under the influence of the current regime, but there are still others like Dr. Hegde who seek to challenge it. If anything, I have to give him points for fighting the good fight.

Movement from Twitter to Mastodon

Because of the hue and cry, quite a few people have started moving from Twitter to Mastodon. People by default don't like toxic situations. People enjoy constructive criticism and different ways of looking at things, but don't enjoy death threats and the like from anonymous handles, which had been happening. So what started as a small wave is steadily turning into a stream: people are moving from Twitter to Mastodon. While I have been on Mastodon for quite some time, it was nice to see an influx of reporters, journalists, constitutional lawyers, and techies, as well as plain social media junkies, turning to mastodon.social and other Masto-related sites.

The altnews founder Pratik Sinha made a thread on Twitter where he asked people to ask questions. While many of the questions were easy, many were also thought-provoking. For instance, there was the idea that a federated system could be gamed. While I don't see how, I have enough faith in human ingenuity to concede that any system can be gamed. We know that any and all software will have bugs and security issues. Add to that the fact that a lot of such new media may use libraries, or derivatives or patches of libraries, which have not been tested enough for security or in a variety of combinations. But this is not a new issue; it is an age-old one. I did share the Mastodon issue tracker. There were also lots of conspiracy theories, some real, some fanciful, and some portending how the rich and powerful could take over, all of which are possible, I had to admit.

I did enjoy sharing https://instances.social/ which helps people choose instances. One of the big changes which I wasn't aware of, but became aware of thanks to somebody named Neha, is that it is now possible to export identities, followers, and blocks from one instance to another. This, at least IMHO, is a big boost to the features of, and an improvement in, federated structures. Of course, the more functionality we have, the more we want. Somebody also shared a cross-posting utility, but it seems to work only with Mastodon and not any of its derivatives, at least not the network I'm on at the moment. I also came to know of another Mastodon instance where a single toot can be 1111 characters long.

For people asking about the cons, the biggest might be security. For instance, if I were to use the cross-posting utility and it were somehow breached, then a compromise of two accounts is possible, not just one. Something to consider. Still, all in all, a very productive evening.

Shirish Agarwal: A tale of unfortunate coincidences and incidents

7 November, 2019 - 15:40

This one is a biggie, so please relax, as it is going to be a really long post. Make sure you have something to chill out with.

People have been talking about ‘ease of doing business’ and whatnot for quite some time. Over the last few months there has been a slowdown, and due to certain issues I had to make some purchases. This blog post will share the travails from a consumer perspective only, as the horrors from the other side are also well known. Here is the series of unfortunate coincidences and incidents which happened to me.

APC UPS

For quite a few months, my old UPS had been giving me issues, hence I decided to see what was available in the market. As my requirements are small, two desktops and sometimes a laptop, I settled on the APC BR1000G-IN. Instead of buying it from Amazon, I decided to try my luck with local vendors, but none of them had the specific UPS I wanted. Before I continue, a bit of trivia: the last time I bought an APC UPS, Schneider Electric was buying APC. That was the big news at the time, in 2007. The UPS I bought then came with a 5-year warranty and worked for 7-8 years, so I was pretty happy with it. I also had a local brand which worked but didn't offer anything special such as an LED interface on the UPS.

Anyway, with those factors in mind, I went to the APC site, looked at the partner list, and saw that there were some 20-25 odd partners in my city, Pune. I sent one or two e-mails to each of the APC partners; some were generic about my requirements, while in others I was more specific about what I wanted. I was hoping that at least a few would respond. To my horror, most e-mails bounced and some went into a black hole, meaning nobody answered. I even tried calling some of the partners' listed numbers, and they too turned out to be either fake or not working; which of the two, I don't know.

Luckily, I have been to Mumbai quite often and have a few contacts who are in IT sales. One of them had numbers of some people in Mumbai, one of which worked, and that person in turn shared the number of a vendor in Pune. One thing led to another and soon I had the BR1000G-IN at my place. Sadly, the whole process was draining and it took almost a week to get the unit. I read the user guide fully, made all the connections, and found that the start/reset button of the UPS is recessed and doesn't connect well with my stubby fingers. I asked my mother, and even she was not able to push the button.

APC Back-UPS Pro 1000 Copyright – APC

As can be seen, the button is in the upper corner, and when trying to turn the unit on and do the required settings, it just wouldn't work. I was not sure whether the UPS was at fault or my stubby finger. As shared, even my mom was not able to push it. After trying for a day or two, I turned to the vendor, had to escalate the issue, and was finally assigned an engineer who came to check it. When you buy a unit for around INR 10k/-, this is the least they can do. Somehow his finger was able to press the button. For both mum and me, though, the start/reset button was a design fail. While I can understand why the original design might have been so, I am sure a lot of people like me would have problems with it. Coincidentally, the engineer was from one of the vendors whom I had e-mailed earlier. I showed him the e-mail I had sent to the firm. Sadly, to date the e-mail address hasn't been corrected: vishal@modularelect.com is still a black hole. I was told this would be corrected soon, but two months later it has still not been corrected.

Sadly, my problems somewhat continue. While I'm thankful to whoever wrote the apcupsd Debian wiki page, for some reason it doesn't work for me. I have asked the good people on the apcupsd-users mailing list and am hopeful I will get an answer in a day or two. The good part is that the cable at least is working and giving some status information, as shared. Hopefully it will be something minor rather than major, but only time will tell.

Another bit of trivia – while some of you may have known this, some might not: I was also looking at whether APC had brought out Li-Ion batteries too. As fate would have it, on the very date I bought the UPS, the Li-Ion batteries were launched or shown on YouTube.

While it will probably take a few years, I am looking forward to it. There is also the possibility of supercapacitors, but that is well into the future.

Cooler Master G650M

I remember writing about getting a better SMPS about a year back. At the time I was having power problems, and I thought it would be prudent to change the SMPS as well. While I was looking for a 700-750W SMPS, I finally had to settle for the Cooler Master G650M Bronze SMPS as that was the one available. There were others too, 1000W (gold) ones, but out of my price range. Even the 650W SMPS cost me around INR 7.5k. This also worked for a few months and then shorted out. I was highly surprised when the SMPS conked out, as the warranty is supposed to run for five whole years. While buying, I had checked the labelling, and it was packed only a couple of months before purchase, so it was not that old. What is most peculiar is that the product is now no longer in the market and has in fact been replaced by the Cooler Master MWE Bronze 650, which has a 3-year warranty. Why that is so is beyond me; products with a 5-year warranty or more are usually in the market for much longer. Unlike other brands, Cooler Master doesn't believe in repair but offers replacement, which takes anywhere between 2 to 3 weeks, something I didn't know at the time of purchase.

Just to be clear, I wasn't sure what was wrong. I had bought the ASUS Prime Z270-P motherboard, which has LED lighting all around it; I have blogged about it before, in the same blog post linked above. What was peculiar was that the stock fan above the CPU was not running, and nor was the cabinet power button working, although the rest of the motherboard was showing power, so it was hard to tell what the problem might be. I have an old voltage detector, something like this, with which I could ascertain that I was getting power at various points, but I still couldn't figure out what was wrong. I did have the old stock SMPS, but as I have shared before it has a lot less wattage; it says 400 on the label but is probably more like 300-325 watts. I removed a few of the components from the system before taking it to the vendor, to make it easier for him to tell what was wrong. I assumed it would most probably be the switch, as I use reboot all the time, usually after I have some updates and need to refresh my session. The vendor was able to diagnose within a few minutes that the fault lay in the SMPS and not in the switch or anywhere else, and asked me to take the unit to the service center for RMA.

While I sent it for RMA, I thought I could survive the required time without any internet. But I was wrong. As news on most news channels in India is so stale and biased, I found it unbearable to be without news within 2-3 days. I again wondered how people in Kashmir manage without all the facilities that we have.

GRUB 2 missing, UEFI and supergrub2

I went back to the vendor with my old stock SMPS, and it worked, but I found that the GRUB 2 menu was missing: the machine was plain booting into Windows 10. I started a thread at debian-user trying to figure out if there was some issue at my end, maybe some grub variable that had got lost, but the responses seemed to suggest that something else had happened. I also read through some of the UEFI documentation on Wikipedia and the web; I didn't go into much depth as that would have been distracting, and the specification itself is evolving and subject to change. I did find some interesting bits and pieces, but that is for a later date perhaps. One of the things I remembered from my previous run-ins with grub2 issues is that supergrub2 had been immensely useful. Sadly though, the version I tried as stable was dumping me to the grub rescue prompt instead of the grub menu when I used the ISO image on a thumb drive. I could have tried to make a go of it but was too lazy. On an off-chance I looked at supergrub2 support and found that somebody else had had the exact same issue, and it had been reported. I chimed in, tried one of the beta versions, and it worked, which made me breathe easier. After getting into Debian, I tried the old $ sudo update-grub which usually fixed such issues. I again tried to boot without the help of the USB disk but failed, as it again booted me into the MS-Windows environment.
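For anyone in the same spot, the usual recipe from within the booted Debian system is to reinstall the EFI copy of GRUB before regenerating the menu (a sketch, assuming a standard amd64 UEFI install with the EFI System Partition mounted at /boot/efi):

$ sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=debian
$ sudo update-grub

I only ran update-grub myself; whether a full grub-install would have helped before the firmware update below, I can't say.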

Firmware update

I don't know how or why I thought it might be a firmware issue. While trawling the web I had come across reports of similar issues, especially with dual-booting or multi-booting, where firmware was one of the culprits. Apart from waiting 2 weeks and then perhaps getting a new hdd, I had no other option than to update the firmware.

Using inxi I was able to get the firmware details, which I also shared on the github page before the update.

$ inxi -M
Machine:   Type: Desktop Mobo: ASUSTeK model: PRIME Z270-P v: Rev X.0x serial:
           UEFI: American Megatrends v: 0808 date: 06/16/2017

I would ask you to look at the version number and the date. I used Asus's EZ Update utility and downloaded the new UEFI BIOS .pcap file. In EZ Update, I just had to give the correct path, and a couple of restarts later I had the new version of the UEFI BIOS, as can be seen below.

Asus UEFI BIOS 1205 Copyright – Asus

One thing to note is that there is no unix way, at least none that I know of, of updating a UEFI BIOS. If anybody knows of one, please let me know. I did look for 'unix .pcap update' but most of the tutorials I found were about network packet sniffing rather than UEFI BIOS updates. Maybe it's an Asus issue. Does anybody know, or can point to something?
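One avenue I have since heard of, and which may or may not apply to this board (many consumer motherboards are not on the LVFS, so this is an assumption rather than a tested answer), is the fwupd project, which applies firmware updates from GNU/Linux:

$ sudo apt install fwupd
$ fwupdmgr refresh       # fetch the latest firmware metadata from the LVFS
$ fwupdmgr get-devices   # list hardware fwupd knows how to update
$ fwupdmgr update        # apply any pending firmware updates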

The update also fixed the EFI boot entry for Debian, which had not been appearing, as can now be seen via efibootmgr.

$ efibootmgr
BootCurrent: 0004
Timeout: 1 seconds
BootOrder: 0004,0000,0003,0001,0002,0005
Boot0000* Windows Boot Manager
Boot0001* UEFI:CD/DVD Drive
Boot0002* UEFI:Removable Device
Boot0003* Hard Drive
Boot0004* debian
Boot0005* UEFI:Network Device

While I'm not going to go into more detail, this should be enough –

$ efibootmgr -v | grep debian
Boot0004* debian HD(2,GPT,xxxxx)/File(\EFI\DEBIAN\GRUBX64.EFI)..BO

I have hidden some details for privacy's sake, such as the address space as well as the GPT hash etc. Finally, the GRUB 2 menu came back to me in all its loveliness –

Grub 2.04-3 debian-edu CC-SA-3.0

There are still some things I want to fix, though; for instance, I hope to help out adrian in testing some of his code. I wasn't able to do so because nowadays we get cheap multi-level cell storage; see for e.g. this paper, which I might have mentioned before.

Ending Notes – Powershell

To end, I did try to make a home even in MS-Windows, but the usefulness of the shell far outstrips anything that is on MS-Windows. I used PowerShell 5 and even downloaded and installed PowerShell 6, and managed to figure out how to get quite a few of the utilities to behave similarly to how they behave under GNU/Linux, but the windowing environment itself was more of an irritant than anything else. One of the biggest letdowns was not being able to use touch. Somebody made a PowerShell module for it, but it still needs to be imported for every session. While I'm sure I could have written a small script for the same, my time was better spent elsewhere. As shared, I also learnt a bit about UEFI in the process. Sharing screenshots of PowerShell 5 and 6.

Powershell 5 Copyright – Microsoft Corporation Powershell-6 Copyright – Microsoft Corporation

Conclusion – While this probably doesn't cover even a quarter of the issues or use-cases, if even one person finds it useful, that is good enough. I have taken a lot of shortcuts and not shared a whole lot, otherwise this would have been much longer. One thing I forgot to mention is that I did find some reports of MS-Windows overwriting boot entries, both in the October 2018 and the October 2019 security updates. How much to trust the issues that people posted, I don't really know.

Gunnar Wolf: Made with Creative Commons ⇒ Hecho con Creative Commons. To the printer!

7 November, 2019 - 06:09

I am very happy to tell you that, around 2.5 years after starting the translation project, today we sent to the presses the Spanish translation for Creative Commons' book, Made with Creative Commons!

This has been quite a feat, on many fronts: social, technical, organizational. Of course, the book is freely redistributable, and you can get it at IIEc-UNAM's repository.

As we are producing this book from DocBook sources, we will also be publishing an EPUB version. Only... we need to clear some processes first (i.e. having the right department approve it, getting a matching ISBN record, etc.), so it will probably only be done by early next year. Of course, you can clone our git repository and build it at home :-]

Of course, I cannot celebrate until the boxes of brand new books land in my greedy hands... But it will happen soon™.

Reproducible Builds: Reproducible Builds in October 2019

7 November, 2019 - 01:25

Welcome to the October 2019 report from the Reproducible Builds project. 👌

In our monthly reports we attempt to outline the most important things that we have been up to recently. As a reminder of what our little project is all about: whilst anyone can inspect the source code of free software for malicious changes, most software is distributed to end users or servers as precompiled binaries. Reproducible builds tries to ensure that no changes have been made during these compilation processes by promising that identical results are always generated from a given source, allowing multiple third-parties to come to a consensus on whether a build was compromised.

In this month’s report, we will cover:

  • Media coverage & conferences – Reproducible builds in Belfast & science
  • Reproducible Builds Summit 2019 – Registration & attendees, etc.
  • Distribution work – The latest work in Debian, OpenWrt, openSUSE, and more…
  • Software development – More diffoscope development, etc.
  • Getting in touch – How to contribute & get in touch

If you are interested in contributing to our venture, please visit our Contribute page on our website.

Media coverage & conferences

Jonathan McDowell gave an introduction on Reproducible Builds in Debian at the Belfast Linux User Group:

Whilst not strictly related to reproducible builds, Sean Gallagher from Ars Technica wrote an article entitled Researchers find bug in Python script may have affected hundreds of studies:

A programming error in a set of Python scripts commonly used for computational analysis of chemistry data returned varying results based on which operating system they were run on.

Reproducible Builds Summit 2019

Registration for our fifth annual Reproducible Builds summit that will take place between the 1st and 8th December in Marrakesh, Morocco has opened and invitations have been sent out.

Similar to previous incarnations of the event, the heart of the workshop will be three days of moderated sessions with surrounding “hacking” days and will include a huge diversity of participants from Arch Linux, coreboot, Debian, F-Droid, GNU Guix, Google, Huawei, in-toto, MirageOS, NYU, openSUSE, OpenWrt, Tails, Tor Project and many more. We are still seeking additional sponsorship for the event; sponsoring enables the attendance of people who would not otherwise be able to attend. If you or your company would be able to sponsor the event, please contact info@reproducible-builds.org.

If you would like to learn more about the event and how to register, please visit our dedicated event page.

Distribution work

GNU Guix announced that they had significantly reduced the size of their “bootstrap seed” by replacing binutils, GCC and glibc with smaller alternatives resulting in the package manager “possessing a formal description of how to build all underlying software” in a reproducible way from a mere 120MB seed.

OpenWrt is a Linux-based operating system targeting wireless network routers and other embedded devices. This month Paul Spooren (aparcar) posted a patch to their mailing list adding KCFLAGS to the kernel build flags to make it easier to rebuild the official binaries.

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution which describes how rpm was updated to run most builds with the -flto=auto argument, saving mirror disk space/bandwidth. In addition, maven-javadoc-plugin received a toolchain patch (originating from Debian) in order to normalise a date.

Debian

In Debian this month Didier Raboud (OdyX) started a discussion on the debian-devel mailing list regarding building Debian source packages in a reproducible manner (thread index). In addition, Lukas Pühringer prepared an upload of in-toto, a framework to protect supply chain integrity by the Secure Systems Lab at New York University which was uploaded by Holger Levsen.

Holger Levsen started a new section on the Debian wiki to centralise documentation of the progress made on various Debian-specific reproducibility issues and noticed that the “essential” package set in the bullseye distribution became unreproducible again, likely due to a bug in Perl itself. Holger also restarted a discussion on Debian bug #774415 which requests that the devscripts collection of utilities that “make the life of a Debian package maintainer easier” adds a script/wrapper to enable easier end-user testing of whether a package is reproducible.

Johannes Schauer (josch) explained that their mmdebstrap tool can create bit-for-bit identical Debian chroots of the unstable and buster distributions for both the essential and minbase bootstrap “variants” (see the sketch below), and Bernhard M. Wiedemann contributed to a discussion regarding adding a “global” build switch to enable/disable Profile-Guided Optimisation (PGO) and Link-time optimisation in the dpkg-buildflags tool, noting that “overall it is still very hard to get reproducible builds with PGO enabled.”
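A quick way to check that claim for yourself (a sketch; the output filenames are illustrative) is to build the same variant twice and compare checksums:

$ mmdebstrap --variant=essential unstable first.tar
$ mmdebstrap --variant=essential unstable second.tar
$ sha256sum first.tar second.tar   # matching hashes mean bit-for-bit identical chroots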

64 reviews of Debian packages were added, 10 were updated and 35 were removed this month adding to our knowledge about identified issues. Three new types were added by Chris Lamb (lamby): nondeterministic_output_in_code_generated_by_ros_genpy, nondeterministic_ordering_in_include_graphs_generated_by_doxygen & nondeterministic_defaults_in_documentation_generated_by_python_traitlets.

Lastly, there was a far-reaching discussion regarding the correctness and suitability of setting the TZ environment variable to UTC when it was noted that the value UTC0 was “technically” more correct.
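For context, the POSIX TZ syntax is a zone name followed by its offset from UTC, so UTC0 explicitly defines a zone called UTC at offset zero, whereas a bare UTC only works if a zone of that name can be looked up in the system's tzdata:

$ TZ=UTC0 date    # guaranteed by POSIX to print the time in UTC
$ TZ=UTC date     # relies on the system knowing a zone named "UTC"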

Software development Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Lastly, a request from Steven Engler to sort fields in the PKG-INFO files generated by the setuptools Python module build utilities was resolved by Jason R. Coombs and Vagrant Cascadian added SOURCE_DATE_EPOCH support to LTSP’s manual page generation.

strip-nondeterminism & reprotest

strip-nondeterminism is our tool to remove specific non-deterministic results from successful builds. This month, Chris Lamb made a number of changes, including uploading version 1.6.1-1 to Debian unstable. This dropped the bug_803503.zip test fixture as it is no longer compatible with the latest version of Perl’s Archive::Zip module (#940973).

reprotest is our end-user tool to build the same source code twice in widely differing environments and then check the binaries produced by each build for any differences. This month, Iñaki Malerba updated our Salsa CI scripts [] as well as adding a --control-build parameter []. Holger Levsen uploaded the package as 0.7.10, bumping the Debian “standards version” to 4.4.1 [].

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.
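In its simplest form (a sketch; the filenames are illustrative), diffoscope takes two files or directories and recursively explains how they differ, exiting non-zero when differences are found:

$ diffoscope --html report.html first.deb second.deb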

This month, Chris Lamb (lamby) made the following changes, including uploading versions 126, 127, 128 and 129 to the Debian unstable distribution:

  • Disassembling and reporting on files related to the R (programming language):

    • Expose an .rdb file’s absolute paths in the semantic/human-readable output, not hidden deep in a hexdump. []
    • Rework and refactor the handling of .rdb files with respect to locating the parallel .rdx prior to inspecting the file to ensure that we do not add files to the user’s filesystem in the case of directly comparing two .rdb files or — worse — overwriting a file in its place. []
    • Query the container for the full path of the parallel .rdx file to the .rdb file as well as looking in the same directory. This ensures that comparing two Debian packages shows any varying path. []
    • Correct the matching of .rds files by also detecting newer versions of this file format. []
    • Don’t read the site and user environment when comparing .rdx, .rdb or .rds files by using Rscript’s --vanilla option. [][]
    • Ensure all object names are displayed, including ones beginning with a fullstop (.) [] and sort package fields when dumping data from .rdb files [].
    • Mask/hide standard error when processing .rdb files [] and don’t include useless/misleading NULL when dumping data from them. []
    • Format package contents as foo = bar rather than using ugly and misleading brackets, etc. [] and include the object’s type [].
    • Don’t pass our long script to parse .rdb files via the command line; use standard input instead. []
    • Call the deparse function to ensure that we do not error out and revert to a binary diff when processing .rdb files with internal “vector” types; they do not automatically coerce to strings. []
    • Other misc/cosmetic changes. [][][]
  • Output/logging:
    • When printing an error from a command, format the command for the user. []
    • Truncate very long command lines when displaying them as an external source of data. []
    • When formatting command lines ensure newlines and other metacharacters appear escaped as \n, etc. [][]
    • When displaying the standard error from commands, ensure we use the escaped version. []
    • Use “exit code” over “return code” terminology when referring to UNIX error codes in displayed differences. []
  • Internal API:
    • Add ability to pass bytestring input to external commands. []
    • Split out command-line formatting into a separate utility function. []
    • Add support for easily masking the standard error of commands. [][]
    • To match the libarchive container, raise a KeyError exception if we request an invalid member from a directory. []
    • Correct string representation output in the traceback when we cannot locate a specific item in a container. []
  • Misc:
    • Move build-dependency on python-argcomplete to its Python 3 equivalent to facilitate Python 2.x removal. (#942967)
    • Track and report on missing Python modules. (#72)
    • Move from deprecated $ADTTMP to $AUTOPKGTEST_TMP in the autopkgtests. []
    • Truncate the tcpdump expected diff to 8KB (from ~600KB). []
    • Try and ensure that new test data files are generated dynamically, i.e. at least no new ones are added without “good” reasons. []
    • Drop unused BASE_DIR global in the tests. []

In addition, Mattia Rizzolo updated our tests to run against all supported Python versions [] and to exit with a UNIX exit status of 2 instead of 1 in case of running out of disk space []. Lastly Vagrant Cascadian updated diffoscope 126 and 129 in GNU Guix, and updated inputs for additional test suite coverage.

trydiffoscope is the web-based version of diffoscope and this month Chris Lamb migrated the tool to depend on the python3-docutils package over python-docutils to allow for Python 2.x removal (#943293) as well as updating the packaging to the latest Debian standards and conventions [][][].

Project website

There was yet more effort put into our website this month, including Chris Lamb improving the formatting of reports [][][][][] and tidying the new “Testing framework” links [], etc.

In addition, Holger Levsen added the Tor Project’s Reproducible Builds Manager to our “Who is Involved?” page and Mattia Rizzolo dropped a literal <br> HTML element [].

Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. This month, the following changes were made:

  • Holger Levsen:
    • Debian-specific changes:
      • Add a script to ease powercycling x86 and arm64 nodes. [][]
      • Don’t create suite-based directories for buildinfos.debian.net. []
      • Make all four tested suites show in a single row on the performance page. []
    • OpenWrt changes:
      • Only run jobs every third day. []
      • Create jobs to run the reproducible_openwrt_rebuild.py script today and in the future. []
  • Mattia Rizzolo:
    • Add some packages that were lost while updating to buster. []
    • Fix the auto-offline functionality by checking the content of the permalinks file instead of following the lastSuccessfulBuild link that is no longer being updated. []
  • Paul Spooren (OpenWrt):
    • Add a reproducible_common utilities file. []
    • Update the openwrt-rebuild script to use schroot. []
    • Use unbuffered Python output [] as well as fixing newlines [][]

The usual node maintenance was performed by Holger Levsen [][], Mattia Rizzolo [][][] and Vagrant Cascadian [][][].

Getting in touch

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

  • IRC: #reproducible-builds on irc.oftc.net
  • Mailing list: rb-general@lists.reproducible-builds.org

This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

Martina Ferrari: IkiWiki

6 November, 2019 - 18:57

I haven't posted in a very long time. Not only because I suck at this, but also because IkiWiki decided to stop working with OpenID, so I can't use the web interface any more to post. Very annoying.

I have already spent a good deal of time trying to find a solution, without any success. I really don't want to migrate to other software again, but this is becoming a showstopper for me.

Martina Ferrari: Fun with the Linux desktop

6 November, 2019 - 18:57

Or, "Why 2014 will NOT be the year of Linux in the desktop".

So, it happens that my mum (66 yo) has been a Debian user for over a year now. With highs and lows, she manages to do what she needs; sometimes I need to intervene.

Today I thought I could send her a quick email explaining how to download using BitTorrent, because of reasons. So, as I was writing, I realised that on many torrent sites, you only get a magnet link these days. No problem! Click on the magnet link, and it should work automagically.

Then I remembered: it works on my computer because I spent a couple of hours some time ago researching how to make FireFox work with magnet links, creating a custom script, etc. I hoped that by now this would have been solved, at least in Debian unstable.

Wrong again. I created a new user on my computer, launched IceWeasel/FireFox and boom: I get a dialog asking me to select a program, not from a list of desktop applications taken from one of the gazillion sources where applications are defined, but just from any place on the file system! (At least, now you don't need to go tweaking with the hidden FireFox configuration editor.)

I was very angry at the brainiac at Mozilla who thought it was a great idea to ignore the host system and do their own MIME-type handling. And then I tried Chromium to see what would happen... And I first get a scary message telling me that it is going to use the super-obscure xdg-open program to open my link, and that it could harm my computer! It was followed by another very helpful dialog telling me something like:

Unable to detect the URI-scheme of "magnet:?xt=urn:btih:diePh6iengei4quaep4shai8ahshahnae9oolahtetheir2bohmu1eelaChui1ohdahruegh4wief6PusahDae4hooshahjoogai7bae9shuvei9shufeX4boog8neichi3OoDee5ei9Uoric6aingairepon9gok8Mee7uRahphah4EucoopheiYin4xe4lahn0goh"

Then the real fun started...

I started looking around to understand how this is supposed to work, as I wanted to provide a patch! It turns out that if you add some values to GConf, this should work.

So, I tried to find where that would be. I read about GConf schemas, default and mandatory values, and their 10 possible locations. I found that Azureus provides a schema, and used that to create one for Transmission. Then I found that, in fact, Transmission was already providing defaults, which are not the same but work the same, and that they had an error there: yes, problem found! (#741069)

No! It turns out that the Gnome desktop does not use that any more, and now they scan the .desktop files (who knows in which of the 100 directory trees where .desktop files are present) for MIME handlers, and the transmission-gtk.desktop file had that correctly. So why does it not work?

Well, it turns out that if I used gvfs-open instead of xdg-open, it did work! The thing is, I am running XFCE here, which is GTK-based but is not Gnome: instead of gvfs-open, I was getting exo-open, which is its brain-dead cousin and can't do anything but files, email and web.
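(For the record, the one incantation that should make xdg-open do the right thing on any FreeDesktop-compliant setup, assuming transmission-gtk and its .desktop file are installed, is:

$ xdg-mime default transmission-gtk.desktop x-scheme-handler/magnet
$ xdg-open 'magnet:?xt=urn:btih:...'   # should now launch Transmission

Whether your desktop actually honours it is, as this whole post shows, another matter.)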

It is fecking 2014, and we still don't have a sensible, unified way to select preferred applications. We still have incompatible, duplicated, incomplete, competing implementations. We have FreeDesktop doing one thing to try and unify criteria, which is then ignored or mis-implemented up and down by some desktops and applications.

Some days I get really angry at the Free Software world.

PS: I guess I will tell mum to copy&paste the links from the browser to the torrent client, but not today. I have already lost 4 hours of sleep on this.

Martina Ferrari: Fun with Linux telephony

6 November, 2019 - 18:57

Continuing with my tendency to vent about stuff, today I want to talk about telephony.

For a few years now, I have needed to use different VoIP providers to keep in touch with friends and family in different parts of the world.

So, I have a DID in Argentina (bought through the excellent DIDWW), and another one in Ireland (which came for free when I was using BlueFace for call termination). I also have a service to handle outbound calls (FlowRoute), but it is not necessarily the only one I use, as the cost and quality of Internet calls vary wildly. I have also used several Betamax providers, the aforementioned BlueFace, tried Netelip, etc.

This results in having soft-phones installed in my mobile devices and laptop, and a hardware SIP phone that I carry around; all of them having configurations for at least 3 different providers. This does not lead to hilarity.

In light of this, I have wanted for a long time to set up my own SIP router to handle all of this, and to be able to register to a single SIP proxy that deals with all the complexity.

Last Friday, the Irish DID decided to stop working. It turns out that, since I don't have my own setup, I was using that provider as a kind of hub, with their provided voicemail, and terminating the Argentinian DID there. So the damage was big.

This made me spend way too many hours during the weekend trying to set up some SIP solution. And I am not pleased.

Asterisk

First, I went with the old and known Asterisk. The default installation in Debian puts 95 configuration files in /etc/asterisk, which you are supposed to review and adjust. Yes, you've read correctly: ninety-five different configuration files. None of them having anything close to a sensible explanation of their syntax or function. Also, not a remote hint of consistency.

I could not find any configuration helper in the Debian archive, just hundreds of PHP-based projects scattered around the web. All the starter guides I've found only walked you through the most basic tasks, but did not give you a way to have a functioning system.

Needless to say, after a short time I grew tired of this, and decided to try something else.

Way too many options

After this, I have spent an inordinate amount of hours just trying to comprehend the differences between the gazillion different VoIP systems out there, and I am still struggling to see them. Even if I understand that X is not a PBX, I don't exactly need a PBX, and most products deliver at least some of the features I need. It seems none of them does a good job of just explaining what you can or can't do with their software.

Documentation is awful in all the projects I researched. When it is there at all, it is incomplete, maybe super detailed at points, but in most cases, there is just no big picture view of the system to just start understanding how things work, and how to find your way.

Sadly, not even the distros seem to be able to put together a list of "recommended VoIP software for different needs".

Yate

Finally, I've found YATE (Yet Another Telephony Engine). It seemed promising: not too bloated, fairly extensible, and scriptable in a few languages!

Sadly, after many hours, it turned out to be a fiasco. The documentation seems decent, until you realise there are many key details left out. Basic information about how a call is handled cannot be found anywhere. Using the scripting power, I was able to find out at least the variables that were available, but that was not enough. I found a mailing list (with the worst archive reader I've seen in ages), where people asked the very same questions I have, and nobody has replied. In years.

So here I am, stuck with not being able to tell if a call is coming from a random host on the internet, or one of my DIDs, or one of the authenticated clients. I guess I will have to start from scratch with Yet Another (Another) Telephony Engine.

Martina Ferrari: Fairytale of New York

6 November, 2019 - 18:57

Things I love about Ireland, partial list.

This:

[embedded YouTube video removed: IkiWiki's youtube template failed to render]

Martina Ferrari: DebConf 13

6 November, 2019 - 18:57

On Sunday I arrived at DebConf 13. It has been so much fun that I didn't have the time to post anything about it!

As usual, I really enjoy meeting old friends and putting faces to nick names. Last night the Cheese and Wine party was once again great.

Not everything has been partying, though. I've been discussing with Enrico ideas for recognising Debian contributors, as he presented in his talk on Sunday. We still have to discuss further and, obviously, sit down and write a lot of code.

Yesterday we also met with Luk, and discussed what to do with the ancient net-tools package. We had had the idea of writing compatibility wrappers using iproute2, but that turned out to be too complicated and brittle. After looking at the current state of net-tools and its reverse dependencies, we decided that the best way to go is to deprecate it by asking rdepends to migrate to iproute2 (for most of them it should be trivial; see the rough equivalences below), and then downgrade net-tools to optional. It won't be removed from the archive, as people will still want it, but it will not be required by any core functionality any more.
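For most callers the migration really is mechanical; a rough cheat sheet (flags illustrative, not exhaustive):

$ ip addr show    # replaces: ifconfig -a
$ ip route show   # replaces: route -n
$ ip neigh show   # replaces: arp -n
$ ss -tlnp        # replaces: netstat -tlnp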

In the next few days, we will be sending an email to debian-devel, and filing about 80 bugs to get rid of the dependency on net-tools, many with patches.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.