Planet Debian

Planet Debian - https://planet.debian.org/

Russ Allbery: remctl 3.16

27 October, 2019 - 04:00

remctl is a simple RPC mechanism (generally based on GSS-API) with rich ACLs and native support in multiple languages.

The primary change in this release is support for Python 3. This has only been tested with Python 2.7 and Python 3.7, but should work with any version of Python 3 later than Python 3.1. This release also cleans up some obsolete syntax in the Python code and deprecates passing in a command as anything other than an iterable of str or bytes.
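
To illustrate the iterable-of-str-or-bytes point, here is a minimal sketch of invoking a remctl command from Python 3. It assumes python-remctl's simple remctl() interface taking host, port, principal and command; the server, principal and command names are made up, so adjust them to your own setup.

import remctl

# Hypothetical host, principal and command; the command is passed as an
# iterable of str (bytes would work as well).
result = remctl.remctl("server.example.com", 4373,
                       "host/server.example.com", ["test", "echo", "hello"])
print(result.status, result.stdout)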

This release also adds a -t flag to the remctl command-line tool to specify the network timeout, fixes a NULL pointer dereference after memory allocation failure in the client library, adds GCC attributes to the client library functions, and fixes a few issues with the build system.

You can get the latest release from the remctl distribution page.

Arturo Borrero González: seville kubernetes meetup 2019-10-24 - summary

25 October, 2019 - 15:28

Yesterday I attended a meetup event in Seville organized by the SVK (seville kubernetes) group. The event was held in the offices of Bitnami, now a VMware business.

The agenda for the event consisted of a couple of talks strongly focused on kubernetes, both of which interested me personally.

The first one was Deploying apps with kubeapps, a talk by Andres Martinez Gotor, engineer at Bitnami. He presented the kubeapps utility, which is an application dashboard for kubernetes developed by Bitnami. We got a variety of information, from how to use kubeapps to how it integrates with helm/tiller and how it works in a multi-tenant enabled cluster. Some comments were also made from the security point of view: things to take into account, etc. In general, kubeapps seems easy to install and use, and it enables end users to easily deploy arbitrary apps into kubernetes.

My feeling during the talk was that this technology is quite interesting for several use cases, including ours in Toolforge, where we allow users to run arbitrary (mostly webservice) apps on the platform. Enabling operations that don't require users to dive into a terminal is always welcome, since we offer our services to a wide range of users with very different technical backgrounds, knowledge and experience.

The next talk was A kube-proxy deep-dive, by Laura Garcia Liebana, engineer and founder of Zevenet. She started the talk by giving an overview of how Docker uses iptables to set up networking and proxying. As she pointed out, the way Docker does it has a direct influence on how kubernetes does its default networking, in the iptables-based kube-proxy component. Of the many options we have for load-balancing and network design in this kind of environment, kube-proxy by default uses an iptables ruleset that is not very performant. It generates about 4 iptables rules per endpoint, which is not great for a kubernetes cluster with 10k endpoints (you would have 40k iptables rules in each node). It was mentioned that some people are using the ipvs-based kube-proxy component to gain a bit of performance.

But Laura had an even more interesting proposal. They are developing a new tool called kube-nftlb, a kube-proxy replacement based on nftlb, which is a load-balancing solution built on nftables. It seems kube-nftlb is still in the development stage, but in a live demo she showed how the nftables rulesets generated by the tool are far more performant and optimized than those generated by kube-proxy, which results in greatly improved scalability of the kubernetes cluster.

After the talks, some pizza time followed, and I greeted many old and new friends. An interesting day! Thanks to Bitnami for organizing the event and thanks to the speakers for giving us new ideas and points of view!

Dirk Eddelbuettel: dang 0.0.11: Small improvements

25 October, 2019 - 08:35

A new release of what may be my most minor package, dang, is now on CRAN. The dang package regroups a few functions of mine that had no other home: lsos() from a StackOverflow question from 2009 (!!) is one, this overbought/oversold price band plotter from an older blog post is another. More recently added were helpers for data.table to xts conversion and a git repo root finder.

Some of these functions (like lsos()) were hanging around in my .Rprofile, others just lived in scripts, so some years ago I started to collect them in a package, and as of February this is now on CRAN too, for reasons that are truly too bizarre to go into. It's a weak and feeble variant of the old Torvalds line about backups and ftp sites …

As I didn’t blog about the 0.0.10 release, the NEWS entry for both follows:

Changes in version 0.0.11 (2019-10-24)
  • New functions getGitRoot, inGit and isConnected.

  • Improved function as.data.table.xts.

Changes in version 0.0.10 (2019-02-10)
  • Initial CRAN release. See ChangeLog for earlier changes.

Courtesy of CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Norbert Preining: Calibre 4.2 based on Python3 in Debian/experimental

25 October, 2019 - 07:59

Following up on the last post on the state of Calibre in Debian, some news for those who want to get rid of Python2 on their computers as soon as possible.

With the Qt transition to 5.12 finally finished, Calibre 4 can be included in Debian. I have just uploaded a Calibre 4.2 build using Python 3 to the experimental suite in Debian. This allows all those who want to get rid of Python2 to upgrade to the experimental version.

WARNINGS

There are a few warnings you shouldn’t forget:

  • the Python3 version is experimental
  • practically all external plugins will be broken, and you will need to remove them from ~/.config/calibre/plugins

That’s it, happy e-booking!

Steinar H. Gunderson: Exploring the DDR arcade CDs

24 October, 2019 - 04:45

Dance Dance Revolution, commonly known as DDR, is/was a series of music games by Konami, where the player has to hit notes by stepping on four panels in time with music. I say “was” because while there are still new developments, the phenomenon has largely faded, at least in my parts of the world.

Back in the heyday, the arcade machines (beasts of 220 kg, plus the two pads weighing 100 kg each!) were based off of Konami's System 573 (573 is chosen because with the appropriate amount of creative readings in Japanese, you can make it sound like “go-na-mi”). System 573 (well, 573D, to be exact) is basically a Playstation with a custom controller connector and an I/O board capable of decoding MP3s. The songs are loaded from a regular CD-ROM.

Recently, MAME developers have cracked the encryption used in S573 so as to be able to emulate the system (a heroic effort!), which allowed me to finally have a look at what's going on in the ISOs. I wasn't involved in this at all, but you can have a look at the source code at GitHub.

The algorithm is home-grown (curiously enough, neither a pure block cipher nor a pure stream cipher) and naturally very weak, but it kept people at bay for 10+ years, so I guess it was a success nevertheless? A quick rundown goes as follows (this is the decryption routine; reverse for encryption):

  1. A 16-bit ciphertext word, V, is read from the input. (I had to do byteswapping, but I don't know if this is just because of my internals or because something in the 573 naturally works in different endians.)
  2. Two 16-bit keys, S1 and S2, are XOR-ed to form a temporary subkey M. Depending on certain bits in M, neighboring bits in V are swapped.
  3. Certain other bits (eight of them) in M are extracted, interleaved with zero bits, and XOR-ed into V.
  4. An 8-bit counter (it starts at the value S3, which forms the last part of the key) is extended to 16 bits by shuffling and duplicating the bits around in a fixed pattern, and then XOR-ed into V, producing the plaintext word.
  5. S1 is rotated one bit to the right, except that the sign bit (the 15th bit) just stays still.
  6. If the 0th and 15th bits of S1 are unequal, S2 is rotated to the right by one bit (this time, the sign bit is like any other bit).
  7. S3 is incremented by one.
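
As an illustration of the structure only, here is a rough Python sketch of one decryption step following the list above. The concrete bit-swap pattern, the extract-and-interleave mask and the counter-extension shuffle are not spelled out here, so they appear as clearly named placeholder functions rather than real implementations:

# Placeholders for the fixed patterns not described above.
def swap_bits(v, m):
    """Swap neighbouring bits of v depending on certain bits of m (pattern unspecified)."""
    raise NotImplementedError

def extract_and_interleave(m):
    """Extract eight bits of m and interleave them with zero bits (pattern unspecified)."""
    raise NotImplementedError

def extend_counter(s3):
    """Widen the 8-bit counter to 16 bits by a fixed shuffle/duplication (pattern unspecified)."""
    raise NotImplementedError

def decrypt_word(v, s1, s2, s3):
    """One decryption step, structure only; returns the plaintext word and the updated key state."""
    m = s1 ^ s2                          # step 2: temporary subkey
    v = swap_bits(v, m)                  # step 2: swap neighbouring bits of V
    v ^= extract_and_interleave(m)       # step 3: eight bits of M, zero-interleaved, XOR-ed in
    plain = v ^ extend_counter(s3)       # step 4: counter widened to 16 bits and XOR-ed in

    # step 5: rotate S1 right by one bit among bits 0-14, keeping the sign bit (bit 15) fixed
    low15 = s1 & 0x7FFF
    s1 = (s1 & 0x8000) | (low15 >> 1) | ((low15 & 1) << 14)
    # step 6: if bits 0 and 15 of S1 are unequal, rotate S2 right by one bit (all 16 bits)
    if ((s1 >> 15) & 1) != (s1 & 1):
        s2 = ((s2 >> 1) | ((s2 & 1) << 15)) & 0xFFFF
    # step 7: increment the 8-bit counter
    s3 = (s3 + 1) & 0xFF
    return plain, s1, s2, s3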

The key is different per-file; someone has published a long list of known keys. I don't know where these come from (perhaps extracted from the images themselves), but I thought it would be a fun challenge to do some entry-level cryptanalysis and see if I could crack a few of the files myself.

There are a couple of observations we can make right away. First, due to the way S1/S2 are updated, and because only their XOR is ever used for anything, we can invert both of them to get an equivalent key. (That is, if {S1,S2,S3} form a key, {S1 XOR 0xFFFF,S2 XOR 0xFFFF,S3} will have the exact same effect.) This reduces the effective keyspace from 40-bit to 39-bit. I'm fairly certain this was unintentional.

Second, the algorithm splits very naturally into two orthogonal pieces; the S1/S2 part does one thing, the S3 part does something else, and they don't really interact (they run cleanly after each other). My approach was a simple known-plaintext attack; since most of the MP3s seem to have a few large blocks of zeros in the first couple of hundred bytes, we can test the entire keyspace and check if we suddenly get a lot of zero bytes after decryption. But due to this orthogonality, we can first apply a given S1/S2, store the result, and then compare it against all possible offsets in the S3 keystream. (The S3 step just XORs in a known sequence, after all; all we need to figure out is the offset into that sequence.) This saves a lot of time, especially as we can precompute the S3 sequence. It's a good example of how testing N keys is much cheaper than N times testing one key, due to structural similarities between the operations given by the key.
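
A compact sketch of that two-phase search follows; s1s2_decrypt() is a hypothetical helper applying only the S1/S2 part of the cipher to a block of words (in the spirit of the placeholder-based sketch above), and the zero-run test stands in for whatever known-plaintext check one prefers.

def s1s2_decrypt(words, s1, s2):
    """Hypothetical helper: apply only the S1/S2 part of the cipher across a block of words."""
    raise NotImplementedError

def looks_like_zeros(words, run=64):
    # The MP3s tend to have large blocks of zero bytes near the start.
    return any(all(w == 0 for w in words[i:i + run]) for i in range(max(1, len(words) - run)))

def crack(ciphertext_words, s3_keystream):
    # Phase one: undo the S1/S2 part once per candidate. Phase two: try every
    # offset into the precomputed S3 keystream (period 256, from the 8-bit counter).
    for s1 in range(0x8000):              # bit 15 of S1 fixed; the inversion symmetry covers the rest
        for s2 in range(0x10000):
            partial = s1s2_decrypt(ciphertext_words, s1, s2)
            for offset in range(256):
                plain = [p ^ s3_keystream[(offset + i) % 256] for i, p in enumerate(partial)]
                if looks_like_zeros(plain):
                    return s1, s2, offset
    return None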

Third, while the S1/S2 steps are fairly slow in traditional hardware, they map really well to the PDEP/PEXT instructions in BMI2. Add some tables, some AVX2, and we can bruteforce the entire keyspace in about half a core-hour (and it's all trivially parallelizable).

And finally, note that the avalanche effect of the S1/S2 steps is really poor. There is this effect where if your key is only off by a few bits, your plaintext will also be wrong by only a few bits. You can see this either as a blessing (if you suddenly start getting zero bytes, you know you are very close to the key and can just try flipping a few bits here and there) or a curse (it's much harder to know whether you have the right key or not, since your plaintext looks “almost” right even if it does not form a valid MP3 file).

So, with that in mind, I cracked most of the files on the EuroMix 2 and DDR Extreme CDs. I was pleased to hear that the files would play with no issues whatsoever in VLC after decryption; unfortunately, there were no interesting ID3 tags or the like. (A fair number of files had a “LAME3.92” tag, though. Strangely, not all of them. Obviously, Konami's asset management didn't involve batch encoding all songs from golden masters.)

All the files are encoded in CBR; I don't know if this is because the 573D can't handle VBR, or if Konami just didn't care all that much. EM2 is in 160 kbit/sec, and Extreme is in a more paltry 112 kbit/sec. Some of the preview snippets are in as little as 56 kbit/sec, and encoded in 32 kHz instead of 44.1! (I always thought the short for Electro Tuned sounded a bit off, and now I finally understand why.)

Nearly all of the songs include a second or three of dead silence at the start, presumably to give the player a bit of time to find the scrolling arrows on the screen. I always intuitively interpreted this as loading time, but it's really part of the MP3—and given CBR, it wastes space on the disc. Similarly, the previews (which are looped) include the fade-out, but this is more forgivable.

Finally, we can uncover a mystery that's been bothering players for a long time; DDR Extreme features 240 songs (when all are unlocked), but there are only 239 song files on the CD. (All 240 previews are there, though, plus some menu music and such.) Which one is missing, and where is it? After some searching and correlating with the previews, I found an answer that will probably surprise nobody: It's indeed the One More Extra Stage song, Dance Dance Revolution! And for some inexplicable reason, it is stored in the flash image. Given the consequences, I guess they didn't want OMES to skip, ever...

Dirk Eddelbuettel: linl 0.0.4: Now with footer

23 October, 2019 - 18:48

A new release of our linl package for writing LaTeX letters with (R)markdown just arrived on CRAN. linl makes it easy to write letters in markdown, with some extra bells and whistles thanks to some cleverness chiefly by Aaron.

This version now supports a (pdf, png, …) footer along with the already-supported header, thanks to an initial PR by Michal Bojanowski to which Aaron added nice customization for scale and placement (as supported by the LaTeX package wallpaper). I also added support for continuous integration testing at Travis CI via a custom Docker RMarkdown container—which is something I should actually say more about at another point.

Here is a screenshot of the vignette showing the simple input for some moderately fancy output (now with a footer):

The NEWS entry follows:

Changes in linl version 0.0.4 (2019-10-23)
  • Continuous integration tests at Travis are now running via custom Docker container (Dirk in #21).

  • A footer for the letter can now be specified (Michal Bojanowski in #23 fixing #10).

  • The header and footer options can be customized more extensively, and are documented (Aaron in #25 and #26).

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the linl page. For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Bits from Debian: The Debian Project stands with the GNOME Foundation in defense against patent trolls

23 October, 2019 - 15:00

In 2012, the Debian Project published our Position on Software Patents, stating the threat that patents pose to Free Software.

The GNOME Foundation has announced recently that they are fighting a lawsuit alleging that Shotwell, a free and Open Source personal photo manager, infringes a patent.

The Debian Project firmly stands with the GNOME Foundation in their efforts to show the world that we in the Free Software communities will vigorously defend ourselves against any abuses of the patent system.

Please read this blog post about GNOME's defense against this patent troll and consider making a donation to the GNOME Patent Troll Defense Fund.

Steve Kemp: /usr/bin/timedatectl

23 October, 2019 - 13:30

Today I was looking over a system to see what it was doing, checking all the running processes, etc, and I spotted that it was running openntpd.

This post is a reminder to myself that systemd now contains an NTP-client, and I should go round and purge the ntpd/openntpd packages from my systems.

You can check on the date/time via:

$ timedatectl 
                      Local time: Wed 2019-10-23 09:17:08 EEST
                  Universal time: Wed 2019-10-23 06:17:08 UTC
                        RTC time: Wed 2019-10-23 06:17:08
                       Time zone: Europe/Helsinki (EEST, +0300)
       System clock synchronized: yes
systemd-timesyncd.service active: yes
                 RTC in local TZ: no

If the system is not set up to sync, it can be enabled via:

$ sudo timedatectl set-ntp true

Finally logs can be checked as you would expect:

$ journalctl -u systemd-timesyncd.service

Shirish Agarwal: Lives around a banking crisis

23 October, 2019 - 03:50

First of all I will share a caricature and a joke, as the rest of the blog post will be serious: it involves what is happening in India right now and the lack of sensitivity to it.

Caricature of me by @caricaturewale on twitter.

The above is a caricature of me done by @caricaturewale. If you can't laugh at yourself, you can't laugh at the world, can you? I did share my thankfulness with the gentleman who created it. It surely must have taken quite some time to first produce a likeness and then fiddle around with the features so it produces a somewhat laughable picture. So all the kudos to him for making this gem.

PMC Bank

The other joke of the day, no, the week, or perhaps the month, was a conversation between a potential bank deposit holder and a banker, or somebody calling on behalf of a bank.

I got a call from bank.
They said: “U pay us ₹ 6000 every month. U will get ₹ 1 crore when U retire”.
I replied: “U reverse the plan, U give me 1 crore now. And I will pay U ₹ 6000 every month till I die.”
The banker disconnected the call. Did I say anything wrong???

Poonam Datta, Maritime and Logistics, IMD, Switzerland on twitter.

Now where is this coming from? I am sure some of you may have seen my last blog post, which shared the travails of PMC Bank last week. There have been a lot of developments since then. What I had forgotten to share that day is that the crisis at PMC Bank has been such that lives are being lost. From last week till date, 5 lives have been lost due to the PMC Bank debacle. The latest was Bharati Sadarangani, a 73-year-old who suffered a cardiac arrest; she had her life savings, as well as her daughter and son-in-law's entire life savings, in that bank account. Now the son-in-law blames himself and his wife, as the wife used to share her uncertainties with her mom. These people and all the others need urgent psychological help and counselling. In fact, while I don't know them, I had suggested to some people to see if they can reach out to these people and maybe a team of psychologists can help them. While Moneylife Foundation is helping the distressed write writ petitions and providing legal help, there is definitely a need for medical counselling so that no more suicides, panic attacks or cardiac arrests happen. But this isn't the end of the story; it is the beginning of one.

Deposit Insurance and Credit Guarantee Corporation

What I shared above is a stamp by the DICGC, which stands for Deposit Insurance and Credit Guarantee Corporation, shared by a gentleman on twitter. The stamp claims, and I quote, that the DICGC is liable for only INR 1 lakh. Now many people were surprised: where did the INR 1 lakh figure come from? While it is aptly documented in many places, I am going to use the CNBC News 18 coverage to share the history of this liability limit, which was last changed in 1993 to INR 1 lakh. While I did my own calculations using the simple formulae of currency exchange, I would defer to Mr. Raghav Bahl's slightly superior method, where he also takes Indian GDP growth into account, which comes to INR 1,00,000.

The method I used is below –

The exchange rate of the dollar in 1993 was 28.13 INR for 1 USD. An amount of INR 1 lakh thus came to USD 3,555 at that time. Using just the simplest inflation tool, that comes to USD 20,380 as of today, which converts back to INR 14,43,566.35. This comes with the base assumption that the tools and calculations I used are in line with what the GOI gives. One point to note though: what the GOI declares as inflation is usually at least a percent or two lower than the real inflation. At least, that's what I have observed for the last twenty-odd years or so. On top of that, I didn't even use the GDP multiple which Mr. Bahl did.
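
For what it's worth, the arithmetic above fits in a few lines of Python; the inflation-adjusted dollar figure is the one quoted above, and the present-day exchange rate of roughly 70.8 INR/USD is simply the rate implied by those figures, not independently sourced.

deposit_inr_1993 = 100_000                   # INR 1 lakh
usd_1993 = deposit_inr_1993 / 28.13          # about USD 3,555 at the 1993 exchange rate
usd_today = 20_380                           # inflation-adjusted figure used above
inr_today = usd_today * 70.83                # implied present-day rate of ~70.8 INR/USD
print(round(usd_1993), round(inr_today))     # 3555 and roughly 14.4 lakh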

As for the argument which some people have, or may have, that it's unlikely many people had that much money: that amount was set by the GOI using a variety of factors, including, one would suppose, the average balance of a middle-class salaried person. If one were to take gold or some other indicator, I'm sure the figure would be a lot higher. This is the most conservative estimate I could make. For the deposit insurance limit to be raised, it would need to be a people's movement; otherwise it could stay at INR 1 lakh for all eternity, or INR 3 lakhs, IF the government feels it's ok with that.

Why bother with such calculations?

One could ask why bother with such calculations? The reason is very simple. While the issue arose at a co-operative bank, the insurance limit is not restricted to co-operative banks but applies to all banks, whether co-operative, national or even private. While I live in Pune, and have always made some very basic lifestyle choices, in Mumbai the same lifestyle choices will set anyone back at least INR 30-40k. Somebody shared that their 1 BHK apartment's electricity costs are INR 15-16k monthly, while the same fluctuates between INR 900-1k in Pune. So in that context, a budget of INR 30-40k is probably conservative for a single person; for a family of 2 or 4, which is common, it would probably be much more. So in any such scenario, it affects all of us. And this is not taking into account any medical emergencies or credit needs for some unplanned event.

The way out

What is the way out? While those who are stuck in PMC Bank do not have a way out unless the RBI takes some firm action and helps the ones in distress, those who are not are equally distressed because they do not know what to do or where to go. While I'm no investment adviser: just don't put all your eggs in one basket. Put some in different post-office schemes, some in PPF, some in gold bars or coins certified by some nationalized bank, or something similar. And most important of all, be prepared for a loss no matter what you choose. In a recessionary economy, while there may be some winners, there will be a lot more losers. And while people will attempt to seduce you with cheap valuations, this is probably not the best time to indulge in fantasies. You would be better off spending time with friends and loved ones, or with books.

Books

If you want to indulge in fantasy, go buy a book at Booksbyweight or Booksbykilo rather than indulging in needless speculation. At the very least you would have an asset which will give you countless hours of pleasure and is a tradable commodity with friends and enemies over a lifetime. And if you are a book lover, reader, writer or romantic like me, they can be the best of friends as well. Unlike friends or loved ones who demand time, they are there for you whenever you are ready for them, rather than vice-versa.

Dirk Eddelbuettel: pkgKitten 0.1.5: Creating R Packages that purr

22 October, 2019 - 19:52

Another minor release 0.1.5 of pkgKitten just hit on CRAN today, after a break of almost three years.

This release provides a few small changes. The default per-package manual page now benefits from a second refinement (building on what was introduced in the 0.1.4 release) in using the Rd macros referring to the DESCRIPTION file rather than duplicating information. Several pull requests fixed sloppy typing in the README.md, NEWS.Rd and manual page; thanks to all contributors for fixing these. Details below.

Changes in version 0.1.5 (2019-10-22)
  • More extensive use of newer R macros in package-default manual page.

  • Install .Rbuildignore and .gitignore files.

  • Use the updated Travis run script.

  • Use more Rd macros in default 'stub' manual page (#8).

  • Several typos were fixed in README.md, NEWS.Rd and the manual page (#9, #10)

More details about the package are at the pkgKitten webpage and the pkgKitten GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Keith Packard: picolibc-updates

22 October, 2019 - 12:34
Picolibc Updates (October 2019)

Picolibc is in pretty good shape, but I've been working on a few updates which I thought I'd share this evening.

Dummy stdio thunk

Tiny stdio in picolibc uses a global variable, __iob, to hold pointers to FILE structs for stdin, stdout, and stderr. For this to point at actual usable functions, applications normally need to create and initialize this themselves.

If all you want to do is make sure the tool chain can compile and link a simple program (as is often required for build configuration tools like autotools), then having a simple 'hello world' program actually build successfully can be really useful.

I added the 'dummyiob.c' module to picolibc which has an iob variable initialized with suitable functions. If your application doesn't define its own iob, you'll get this one instead.

$ cat hello.c
#include <stdio.h>

int main(void)
{
    printf("hello, world\n");
}
$ riscv64-unknown-elf-gcc -specs=picolibc.specs hello.c
$ riscv64-unknown-elf-size a.out
   text    data     bss     dec     hex filename
    496      32       0     528     210 a.out
POSIX thunks

When building picolibc on Linux for testing, it's useful to be able to use glibc syscalls for input and output. If you configure picolibc with -Dposix-io=true, then tinystdio will use POSIX functions for reading and writing, and also offer fopen and fdopen functions as well.

To make calling glibc syscall APIs work, I had to kludge the stat structure and fcntl bits. I'm not really happy about this, but it's really only for testing picolibc on a Linux host, so I'm not going to worry too much about it.

Remove 'mathfp' code

The newlib configuration docs aren't exactly clear about what the newlib/libm/mathfp directory contains, but if you look at newlib faq entry 10 it turns out this code was essentially a failed experiment in doing a 'more efficient' math library.

I think it's better to leave 'mathfp' in git history and not have it confusing us in the source repository, so I've removed it along with the -Dhw-fp option.

Other contributions

I've gotten quite a few patches from other people now, which is probably the most satisfying feedback of all.

  • powerpc build patches
  • stdio fixes
  • cleanup licensing, removing stale autotools bits
  • header file cleanups from newlib which got missed
Semihosting support

RISC-V and ARM both define a 'semihosting' API, which provides APIs to access the host system from within an embedded application. This is useful in a number of environments:

  • GDB through OpenOCD and JTAG to an embedded device
  • Qemu running bare-metal applications
  • Virtual machines running anything up to and including Linux

I really want to do continuous integration testing for picolibc on as many target architectures as possible, but it's impractical to try and run that on actual embedded hardware. Qemu seems like the right plan, but I need a simple mechanism to get error messages and exit status values out from the application.

Semihosting offers all of the necessary functionality to run tests without requiring an emulated serial port in Qemu and a serial port driver in the application.

For now, that's all the functionality I've added; console I/O (via a definition of _iob) and exit(2). If there's interest in adding more semihosting API calls, including file I/O, let me know.

I wanted to make semihosting optional, so that applications wouldn't get surprising results when linking with picolibc. This meant placing the code in a separate library, libsemihost. To get this linked correctly, I had to do a bit of magic in the picolibc.specs file. This means that selecting semihost mode is now done with a gcc option, -semihost, instead of just adding -lsemihost to the linker line.

Semihosting support for RISC-V is already upstream in OpenOCD. I spent a couple of hours last night adapting the ARM semihosting support in Qemu for RISC-V and have pushed that to my riscv-semihost branch in my qemu project on github

A real semi-hosted 'hello world'

I've been trying to make using picolibc as easy as possible. Learning how to build embedded applications is hard, and reducing some of the weird tool chain fussing might make it easier. These pieces work together to simplify things:

  • Built-in crt0.o
  • picolibc.specs
  • picolibc.ld
  • semihost mode

Here's a sample hello-world.c:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    printf("hello, world\n");
    exit(0);
}

On Linux, compiling is easy:

$ cc hello-world.c 
$ ./a.out 
hello, world
$

Here's how close we are to that with picolibc:

$ riscv64-unknown-elf-gcc -march=rv32imac -mabi=ilp32 --specs=picolibc.specs -semihost -Wl,-Tqemu-riscv.ld hello-world.c
$ qemu-system-riscv32 -semihosting -machine spike -cpu rv32imacu-nommu -kernel a.out -nographic
hello, world
$

This requires a pile of options to specify the machine that qemu emulates, both when compiling the program and again when running it. It also requires one extra file to define the memory layout of the target processor, 'qemu-riscv.ld':

__flash = 0x80000000;
__flash_size = 0x00080000;
__ram = 0x80080000;
__ram_size = 0x40000;
__stack_size = 1k;

These are all magic numbers that come from the definition of the 'spike' machine in qemu, which defines 16MB of RAM starting at 0x80000000 that I split into a chunk for read-only data and another chunk for read-write data. I found that definition by looking in the source; presumably there are easier ways?

Larger Examples

I've also got snek running on qemu for both arm and riscv processors; that exercises a lot more of the library. Beyond this, I'm working on freedom-metal and freedom-e-sdk support for picolibc and hope to improve the experience of building embedded RISC-V applications.

Future Plans

I want to get qemu-based testing working on both RISC-V and ARM targets. Once that's running, I want to see the number of test failures reduced to a more reasonable level and then I can feel comfortable releasing version 1.1. Help on these tasks would be greatly appreciated.

Dirk Eddelbuettel: digest 0.6.22: More goodies!

22 October, 2019 - 08:31

A new version of digest arrived at CRAN earlier today, and I just sent an updated package to Debian too.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, and spookyhash algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at 868k monthly downloads) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.

This release comes pretty much exactly one month after the very nice 0.6.21 release but contains five new pull requests. Matthew de Queljoe did a little bit of refactoring of the vectorised digest function he added in 0.6.21. Ion Suruceanu added a CFB cipher for AES. Bill Denney both corrected and extended sha1. And Jim Hester made the windows-side treatment of filenames UTF-8 compliant.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Chris Lamb: Tour d'Orwell: Rue du Pot-de-Fer

21 October, 2019 - 22:38

In yet another George Orwell-themed jaunt, just off the «Place de la Contrescarpe» in the Vᵉ arrondissement (where the Benalla affair took place) is the narrow «Rue du Pot-de-Fer» where Orwell lived for 18 months à number six.

He would not have written a glowing TripAdvisor review of the hotel he stayed at: "the walls were as thin as matchwood and to hide the cracks they had been covered with layer after layer of pink paper [and] long lines of bugs marched all day long like columns of soldiers and at night came down ravenously hungry so that one had to get up every few hours and kill them."

Skipping the shisha bar that exists on le premiere étage, whilst I was not "ravenously" hungry I couldn't resist the escargot in the nearby square whilst serenaded by an accordion player clearly afflicted with ennui or some other stereotypically-Parisien malady…

Norbert Preining: Pleasures of Tibetan input and typesetting with TeX

21 October, 2019 - 13:23

Many years ago I decided to learn Tibetan (and the necessary Sanskrit), and enrolled in the university studies of Tibetology in Vienna. Since then I have mostly forgotten Tibetan due to the absolute absence of any practice, save for regular consultations from a friend on how to typeset Tibetan. In the future I will write a lengthy article for TUGboat on this topic, but here I want to concentrate only on a single case, the character ཨྰོཾ. Since you might have a hard time getting it rendered correctly in your browser, here is an image of it.

In former times we used ctib to typeset Tibetan, but the only font that was usable with it is a bit clumsy, and there are several much better fonts available now. Furthermore, always using transliterated input instead of the original Tibetan text might be a problem for people outside Tibetan academia. If we want to use one of these (obviously) ttf/otf fonts, a switch to either XeTeX or LuaTeX is required.

It turned out that the font my friend currently uses, Jomolhari, although it is beautiful, is practically unusable. My guess is that the necessary tables in the truetype font are not correctly initialized, but this is detective work for later. I tried with several other fonts, in particular DDC Uchen, Noto Serif Tibetan, Yagpo Uni, and Qomolangma, and obtained much better results. Complicated compounds like རྣྲི་ rendered properly in both LuaTeX (in the standard version as well as the HarfBuzz-based version LuaHBTeX) and XeTeX. Others, like ཨོྂ་, rendered properly only with Noto Serif Tibetan.

But there was one combination that escaped correct typesetting: ཨྰོཾ་ which is the famous OM with a subjoined a-chung. It turned out that with LuaLaTeX it worked, but with LuaHBLaTeX (with HarfBuzz rendering actually enabled) and XeTeX the output was a separate OM followed by a dotted circle with a subjoined a-chung.

The puzzling thing about it was that it was rendered correctly in my terminal, and even in my browser; the reason being that they use the font Yagpo Uni. Unfortunately other combinations rendered badly with Yagpo Uni, which is surprising because Yagpo Uni seems to be one of the most complete fonts, with practically all stacks (not only standard ones) precomposed as glyphs in the font.

With the help of the luaotfload package maintainers we could dissect the problem:

  • The unicode sequence was U+0F00 U+0FB0, which is the pre-composed OM with the subjoined a-chung
  • This sequence was input via EWTS (M17N) by entering oM+', which somehow feels natural to use
  • HarfBuzz decomposes U+0F00 into U+0F68 U+0F7C U+0F7E according to the ccmp table of the font
  • HarfBuzz either does not recognize U+0F00 as a glyph that can get further consonants subjoined, or the decomposed sequence followed by the subjoined a-chung U+0F68 U+0F7C U+0F7E U+0FB0 cannot be rendered correctly, thus HarfBuzz renders the subjoined a-chung separately under the famous dotted circle

Having this in mind, it was easy to come up with a correct input sequence to achieve the correct output: a+'oM which generates the unicode sequence

  U+0F68 TIBETAN LETTER A
  U+0F7C TIBETAN VOWEL SIGN O
  U+0F7E TIBETAN SIGN RJES SU NGA RO
  U+0FB0 TIBETAN SUBJOINED LETTER -A

which in turn is correctly rendered by HarfBuzz, and thus by both XeTeX and LuaHBTeX with HarfBuzz rendering enabled.
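
As a small illustration, the two input orders can be compared directly by listing their codepoints with Python's unicodedata module; this only shows the sequences involved, since the actual rendering difference depends on the shaping engine and the font.

import unicodedata

precomposed = "\u0F00\u0FB0"              # oM+' : precomposed OM followed by the subjoined a-chung
decomposed  = "\u0F68\u0F7C\u0F7E\u0FB0"  # a+'oM : the sequence that HarfBuzz shapes correctly

for label, s in (("oM+'", precomposed), ("a+'oM", decomposed)):
    print(label)
    for ch in s:
        print(f"  U+{ord(ch):04X} {unicodedata.name(ch)}")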

Here are some further comments concerning HarfBuzz-based rendering (luahbtex-dev, xetex, libreoffice) versus luatex:

  • Yagpo Uni has lots of pre-composed compounds, but is rather picky about how they are input. The above letter renders correctly when input as oM+' (that is U+0F00 U+0FB0), but incorrectly when input as a+'oM.
  • Noto Serif Tibetan accepts most input from EWTS correctly, but oM+' does not work, while a+'oM does (opposite of Yagpo Uni!)
  • Concerning ཨོྂ་ (input o~M`) only Noto Serif Tibetan and Qomolangma works, unfortunately Yagpo is broken here. I guess a different input format is necessary for this.
  • Qomolangma does not use the correct size of a-chung in ligatures that are more complex than OM
  • Jomolhari seems to be completely broken with respect to normal input

All this tells me that there is really a huge difference in what fonts expect, how to input, and how to render, and with endangered minority languages like Tibetan the situation seems to be much worse.

Concerning the rendering of ཨྰོཾ་ I am still not sure where to send a bug report: One could easily catch this order of characters in the EWTS M17N input definition, but that would fix it only for EWTS and other areas would be left out, in particular LibreOffice rendering as well as any other HarfBuzz based application. So I would rather see it fixed in HarfBuzz (and I am currently testing code). But all the other problems that appear are still unsolved.

Jonathan McDowell: Debian Buster / OpenWRT 18.06.4 upgrade notes

21 October, 2019 - 00:30

Yesterday was mostly a day of doing upgrades (as well as the usual Saturday things like a parkrun). First on the list was finally upgrading my last major machine from Stretch to Buster. I'd done everything else and wasn't expecting any major issues, but it runs more things so there's generally more opportunity for problems. Some notes I took along the way:

  • apt upgrade updated collectd but due to the loose dependencies (the collectd package has a lot of plugins and chooses not to depend on anything other than what the core needs) libsensors5 was not pulled in so the daemon restart failed. This made apt/dpkg unhappy until I manually pulled in libsensors5 (replacing libsensors4).
  • My custom fail2ban rule to drop spammers trying to register for wiki accounts too frequently needed the addition of a datepattern entry to work properly.
  • The new version of python-flaskext.wtf caused deprecation warnings from a custom site I run, fixed by moving from Form to FlaskForm (see the sketch after this list). I still have a problem with a TypeError: __init__() takes at most 2 arguments (3 given) error to track down.
  • ejabberd is still a pain. This time the change of the erlang node name caused issues with it starting up. There was a note in NEWS.Debian but the failure still wasn’t obvious at first.
  • For most of the config files that didn’t update automatically I just did a manual vimdiff and pulled in the updated comments; the changes were things I wanted to keep like custom SSL certificate configuration or similar.
  • PostgreSQL wasn’t as smooth as last time. A pg_upgradecluster 9.6 main mostly did the right thing (other than taking ages to migrate the SpamAssassin Bayes database), but left 9.6 still running rather than 11.
  • I’m hitting #924178 in my duplicity backups - they’re still working ok, but might be time to finally try restic
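
For reference, the Flask-WTF change mentioned in the list above is just an import and base-class rename; here is a minimal before/after sketch (the form and its fields are made up):

# Before (deprecated): from flask_wtf import Form
from flask_wtf import FlaskForm
from wtforms import StringField
from wtforms.validators import DataRequired

class ContactForm(FlaskForm):             # previously: class ContactForm(Form)
    name = StringField("Name", validators=[DataRequired()])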

All in all it went smoothly enough; the combination of a lot of packages and the PostgreSQL migration accounted for most of the time. Perhaps it's time to look at something with SSDs rather than spinning rust (definitely something to check out when I'm looking for alternative hosts).

The other major-ish upgrade was taking my house router (a repurposed BT HomeHub 5A) from OpenWRT 18.06.2 to 18.06.4. Not as big as the Debian upgrade, but with the potential to leave me with non-working broadband if it went wrong[1]. The CLI sysupgrade approach worked fine (as it has in the past), helped by the fact I've added my MQTT and SSL configuration files to /etc/sysupgrade.conf so they get backed up and restored. OpenWRT does a full reflash for an upgrade given the tight flash constraints, so other than the config files that get backed up you need to restore anything extra. This includes non-default packages that were installed, so I end up with something like

opkg update
opkg install mosquitto-ssl 6in4 ip-tiny picocom kmod-usb-serial-pl2303

And then I have a custom compile of the collectd-mod-network package to enable encryption, and my mqtt-arp tool to watch for house occupants:

opkg install collectd collectd-mod-cpu collectd-mod-interface \
     collectd-mod-load collectd-mod-memory collectd-mod-sensors \
     /tmp/collectd-mod-network_5.8.1-1_mips_24kc.ipk
opkg install /tmp/mqtt-arp_1.0-1_mips_24kc.ipk

One thing that got me was the fact that installing the 6to4 package didn't give me working v6; I had to restart the router for it to even configure up its v6 interfaces. Not a problem, just didn't notice for a few hours[2].

While I was on a roll I upgraded the kernel on my house server to the latest stable release, and Home Assistant to 0.100.2. As expected neither had any hiccups.

  1. Of course I have a spare VDSL modem/router that I can switch in, but it’s faff I prefer to avoid. 

  2. And one I shouldn’t hit in the future, as I’m moving to an alternative provider with native IPv6 this week. 

Dirk Eddelbuettel: RcppGSL 0.3.7: Fixes and updates

20 October, 2019 - 22:01

A new release 0.3.7 of RcppGSL is now on CRAN. The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

Stephen Wade noticed that we were not actually freeing memory from the GSL vectors and matrices as we set out to do. And he is quite right: a dormant bug, present since the 0.3.0 release, has now been squashed. I had one boolean wrong, and this has now been corrected. I also took the opportunity to switch the vignette to prebuilt mode: Now a pre-made pdf is just included in a Sweave document, which makes the build more robust to tooling changes around the vignette processing. Lastly, the package was converted to the excellent tinytest unit test framework. Detailed changes below.

Changes in version 0.3.7 (2019-10-20)
  • A logic error was corrected in the wrapper class, vector and matrix memory is now properly free()'ed (Dirk in #22 fixing #20).

  • The introductory vignette is now premade (Dirk in #23), and was updated lightly in its bibliography handling.

  • The unit tests are now run by tinytest (Dirk in #24).

Courtesy of CRANberries, a summary of changes to the most recent release is also available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Daniel Silverstone: A quarter in review - Nearly there, 2020 in sight

20 October, 2019 - 22:00
The 2019 plan - Third-quarter review

At the start of the year I blogged about my plans for 2019. For those who don't want to go back to read that post, in summary they are:

  1. Continue to lose weight and get fit. I'd like to reach 80kg during the year if I can
  2. Begin a couch to 5k and give it my very best
  3. Focus my software work on finishing projects I have already started
  4. Where I join in other projects be a net benefit
  5. Give back to the @rustlang community because I've gained so much from them already
  6. Be better at tidying up
  7. Save up lots of money for renovations
  8. Go on a proper holiday

At the point that I posted that, I promised myself to do quarterly reviews and so here is the third of those. The first can be found here, and the second here.

1. Weight loss

So when I wrote in July, I was around 83kg. I am very sad to report that I did not manage to drop another 5kg, I'm usually around 81.5kg at the moment, though I do peak up above 84kg and down as far as 80.9kg. The past three months have been an exercise in incredible frustration because I'd appear to be making progress only to have it vanish in one day.

I've continued my running though, and cycling, and I've finally gone back to the gym and asked for a resistance routine to complement this, so here's hoping.

Yet again, I continue to give myself a solid "B", though if I were generous, given everything else I might consider a "B+"

2. Fitness (was Couch to 5k)

When I wrote in July, I was pleased to say that I was sub 28m on my parkrun. I'm now consistently sub 27:30, and have personal bests of 26:18 and 26:23 at two of my local parkruns.

I have started running with my colleagues once a week, and that run is a bit longer (5.8 to 7km depending on our mood) and while I've only been out with them a couple of times so far, I've enjoyed running in a small group. With the weather getting colder I've bought myself some longer sleeved tops and bottoms to hopefully allow me to run through the winter. My "Fitness age" is now in the mid 40s, rather than high 60s, so I'm also approaching a point where I'm as fit as I am old, which is nice.

So far, so good, I'm continuing with giving myself an "A+"

3. Finishing projects

This is a much more difficult point for me this year, sadly. I continued to do some work on NetSurf this quarter. We had another amazing long-weekend where we worked on a whole bunch of NS stuff, and I've even managed to give up some of my other spare time to resolve bugs, though they tend to be quite hard and I'm quite slow. I'm very pleased with how I've done with that.

Lars and I continue to work on our testing project, now called Subplot. Though, frankly, Lars does almost all of the work on this.

I did accidentally start another project (remsync) after buying a reMarkable tablet. So that bumps my score down a half point.

So over-all, this one drops to "C-", from the "C" earlier in the year - still (barely) satisfactory but could do a lot better.

4. Be a net benefit

My efforts for Debian continue to be restricted, though I hope it continues to just about be a net benefit to the project. My efforts with the Lua community have not extended again, so pretty much the same.

I remain invested in Rust stuff, and have managed (just about) to avoid starting in on any other projects, so things are fairly much the same as before. I lead the Rust installer working group and we recently released a huge update to rustup which adds a bunch of desired stuff.

While the effects of my Rust work affect both this and the next section, I am very pleased with how I did and have upgraded myself to an "A-" for this.

5. Give back to the Rust community

I have worked very hard on my Rustup work, and I have also started to review documentation and help updates for the Rust compiler itself. I've become involved in the Sequoia project, at least peripherally, and have attended a developer retreat with them which was both relaxing and productive.

I feel like the effort I'm putting into Rust is being recognised in ways I did not expect nor hope for, but that's very positive and has meant I've engaged even more with the community and feel like I'm making a valuable contribution.

I still hang around on the #wg-rustup Discord channel and other channels on that server, helping where I can, and I've been trying to teach my colleagues about Rust so that they might also contribute to the community.

So initially an 'A', I dropped to an 'A-' last time, but I feel like I've put enough effort in to give myself 'A+' this time.

6. Be better at tidying up

I've managed to do a bit more tidying, but honestly this is still pretty bad. I managed to clean up some stuff, but then it slips back into mess. The habit forming is still not happening. At this point I think I really need to grab the bull by the horns and focus on this one, so it'll be better in the next report I promise.

I'm upgrading to an 'E' because I am making some difference, just not enough.

7. Save up money for renovations

We spent those savings on our renovations, but I do continue to manage to put some away. As you can see in the next section though, I've been spending money on myself too.

I think I get to keep an 'A' here, but only just.

8. Go on a proper holiday

I spent a week with the Sequoia-PGP team in Croatia which was amazing. I have a long weekend planned with them in Barcelona for Rustfest too. Some people would say that those aren't real holidays, but I relaxed, did stuff I enjoyed, tried new things, and even went on a Zip-line in Croatia, so I'm counting it as a win.

While I've not managed to take a holiday with Rob, he's been off to do things independently, so I'm upgrading us to a 'B' here.

Summary

Last quarter I had a B+, A+, C, B, A-, F, A, C+, which, ignoring the F, was better than earlier in the year, though still not great.

This quarter I have a B+, A+, C-, A-, A+, E, A, B. The F has gone which is nice, and I suppose I could therefore call that a fair A- average, or perhaps C+ if I count the E.

Molly de Blanc: Autonomy and consent

18 October, 2019 - 20:35

When I was an undergraduate, I took a course on medical ethics. The core takeaways from the class were that autonomy is necessary for consent, and consent is necessary for ethical action.

There is a reciprocal relationship between autonomy and consent. We are autonomous creatures, we are self-governing. In being self-governing, we have the ability to consent, to give permission to others to interact with us in the ways we agree on. We can only really consent when we are self-governing, otherwise, it’s not proper consent. Consent also allows us to continue to be self-governing. By giving others permission, we are giving up some control, but doing so on our own terms.

In order to actually consent, we have to grasp the situation we’re in, and as much about it as possible. Decision making needs to come from a place of understanding.

It’s a fairly straightforward path when discussing medicine: you cannot operate on someone, or force them to take medication, or any other number of things without their permission to do so, and that their permission is based on knowing what’s going on.

I cannot stress how important it is to transpose this idea onto technology. This is an especially valuable concept when looking at the myriad ways we interact with technology, and especially computing technology, without even being given the opportunity to consent, whether or not we come from a place of autonomy.

At the airport recently, I heard that a flight was boarding with facial recognition technology. I remembered reading an article over the summer about how hard it is to opt out. It gave me pause. I was running late for my connection and worried that I would be put in a situation where I would have to choose between the opt-out process and missing my flight. I come from a place of greater understanding than the average passenger (I assume) when it comes to facial recognition technology, but I don't know enough about its implementation in airports to feel as though I could consent. Many people approach this from a place of even less understanding than I have.

From my perspective, there are two sides to understanding and consent: the technology itself and the way gathered data is being used. I’m going to save those for a future blog post, but I’ll link back to this one, and edit this to link forward to them.

Matthew Garrett: Letting Birds scooters fly free

18 October, 2019 - 18:44
(Note: These issues were disclosed to Bird, and they tell me that fixes have rolled out. I haven't independently verified)

Bird produce a range of rental scooters that are available in multiple markets. With the exception of the Bird Zero[1], all their scooters share a common control board described in FCC filings. The board contains three primary components - a Nordic NRF52 Bluetooth controller, an STM32 SoC and a Quectel EC21-V modem. The Bluetooth and modem are both attached to the STM32 over serial and have no direct control over the rest of the scooter. The STM32 is tied to the scooter's engine control unit and lights, and also receives input from the throttle (and, on some scooters, the brakes).

The pads labeled TP7-TP11 near the underside of the STM32 and the pads labeled TP1-TP5 near the underside of the NRF52 provide Serial Wire Debug, although confusingly the data and clock pins are the opposite way around between the STM and the NRF. Hooking this up via an STLink and using OpenOCD allows dumping of the firmware from both chips, which is where the fun begins. Running strings over the firmware from the STM32 revealed "Set mode to Free Drive Mode". Challenge accepted.

Working back from the code that printed that, it was clear that commands could be delivered to the STM from the Bluetooth controller. The Nordic NRF52 parts are an interesting design - like the STM, they have an ARM Cortex-M microcontroller core. Their firmware is split into two halves, one the low level Bluetooth code and the other application code. They provide an SDK for writing the application code, and working through Ghidra made it clear that the majority of the application firmware on this chip was just SDK code. That made it easier to find the actual functionality, which was just listening for writes to a specific BLE attribute and then hitting a switch statement depending on what was sent. Most of these commands just got passed over the wire to the STM, so it seemed simple enough to just send the "Free drive mode" command to the Bluetooth controller, have it pass that on to the STM and win. Obviously, though, things weren't so easy.

It turned out that passing most of the interesting commands on to the STM was conditional on a variable being set, and the code path that hit that variable had some impressively complicated looking code. Fortunately, I got lucky - the code referenced a bunch of data, and searching for some of the values in that data revealed that they were the AES S-box values. Enabling the full set of commands required you to send an encrypted command to the scooter, which would then decrypt it and verify that the cleartext contained a specific value. Implementing this would be straightforward as long as I knew the key.

Most AES keys are 128 bits, or 16 bytes. Digging through the code revealed 8 bytes worth of key fairly quickly, but the other 8 bytes were less obvious. I finally figured out that 4 more bytes were the value of another Bluetooth variable which could be simply read out by a client. The final 4 bytes were more confusing, because all the evidence made no sense. It looked like it came from passing the scooter serial number to atoi(), which converts an ASCII representation of a number to an integer. But this seemed wrong, because atoi() stops at the first non-numeric value and the scooter serial numbers all started with a letter[2]. It turned out that I was overthinking it and for the vast majority of scooters in the fleet, this section of the key was always "0".
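
Purely as an illustration of how such a key would be assembled from the three sources described above, here is a Python sketch; every byte value is a placeholder rather than real key material, and the byte order of the serial-number word is a guess.

import struct

def c_atoi(s):
    """C-style atoi(): read leading digits (optionally signed), stop at the first other character."""
    digits = ""
    for ch in s:
        if ch.isdigit() or (ch in "+-" and not digits):
            digits += ch
        else:
            break
    return int(digits) if digits.strip("+-") else 0

firmware_half = bytes(8)                  # 8 bytes found in the STM32 firmware (placeholder zeros)
ble_half      = bytes(4)                  # 4 bytes read from a readable BLE attribute (placeholder zeros)
serial_word   = c_atoi("AB12345")         # 0 for the fleet's letter-prefixed serial numbers

key = firmware_half + ble_half + struct.pack("<I", serial_word)
assert len(key) == 16                     # a standard 128-bit AES key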

At that point I had everything I needed to write a simple app to unlock the scooters, and it worked! For about 2 minutes, at which point the network would notice that the scooter was unlocked when it should be locked and send a lock command to force disable the scooter again. Ah well.

So, what else could I do? The next thing I tried was just modifying some STM firmware and flashing it onto a board. It still booted, indicating that there was no sort of verified boot process. Remember what I mentioned about the throttle being hooked through the STM32's analogue to digital converters[3]? A bit of hacking later and I had a board that would appear to work normally, but about a minute after starting the ride would cut the throttle. Alternative options are left as an exercise for the reader.

Finally, there was the component I hadn't really looked at yet. The Quectel modem actually contains its own application processor that runs Linux, making it significantly more powerful than any of the chips actually running the scooter application[4]. The STM communicates with the modem over serial, sending it an AT command asking it to make an SSL connection to a remote endpoint. It then uses further AT commands to send data over this SSL connection, allowing it to talk to the internet without having any sort of IP stack. Figuring out just what was going over this connection was made slightly difficult by virtue of all the debug functionality having been ripped out of the STM's firmware, so in the end I took a more brute force approach - I identified the address of the function that sends data to the modem, hooked up OpenOCD to the SWD pins on the STM, ran OpenOCD's gdb stub, attached gdb, set a breakpoint for that function and then dumped the arguments being passed to that function. A couple of minutes later and I had a full transaction between the scooter and the remote.

The scooter authenticates against the remote endpoint by sending its serial number and IMEI. You need to send both, but the IMEI didn't seem to need to be associated with the serial number at all. New connections seemed to take precedence over existing connections, so it would be simple to just pretend to be every scooter and hijack all the connections, resulting in scooter unlock commands being sent to you rather than to the scooter or allowing someone to send fake GPS data and make it impossible for users to find scooters.

In summary: Secrets that are stored on hardware that attackers can run arbitrary code on probably aren't secret, not having verified boot on safety critical components isn't ideal, devices should have meaningful cryptographic identity when authenticating against a remote endpoint.

Bird responded quickly to my reports, accepted my 90 day disclosure period and didn't threaten to sue me at any point in the process, so good work Bird.

(Hey scooter companies I will absolutely accept gifts of interesting hardware in return for a cursory security audit)

[1] And some very early M365 scooters
[2] The M365 scooters that Bird originally deployed did have numeric serial numbers, but they were 6 characters of type code followed by a / followed by the actual serial number - the number of type codes was very constrained and atoi() would terminate at the / so this was still not a large keyspace
[3] Interestingly, Lime made a different design choice here and plumb the controls directly through to the engine control unit without the application processor having any involvement
[4] Lime run their entire software stack on the modem's application processor, but because of [3] they don't have any realtime requirements so this is more straightforward


Ritesh Raj Sarraf: User Mode Linux 5.2

17 October, 2019 - 21:04
User Mode Linux 5.2

User Mode Linux version 5.2 has been uploaded to Debian Unstable and will soon be available on the supported architectures. This upload took more time than usual as I ran into a build-time failure caused by the newer PCAP library.

Thanks to active upstream developers, this got sorted out quickly. In the longer run, we may have a much better fix for it.

What is User Mode Linux a.k.a uml

It is one of the initial virtualization technologies that Linux provided, which still works and is supported/maintained. It is about running a Linux kernel as a single unprivileged user process.

There was a similar project, coLinux, a port of the Linux kernel to Microsoft Windows, that I used to recommend to people who were more familiar with the Microsoft Windows environment. Conceptually, it was very similar to UML. With coLinux, XMing and PulseAudio, it was a delight to see GNU/Linux based applications run efficiently on Microsoft Windows.

That was years ago. Unfortunately, the last time I checked on coLinux, it did not seem to be maintained any more. User Mode Linux too hasn't had a very large user/developer base, but being part of the Linux kernel has kept it fairly well maintained.

The good news is that for the last 2 years or so, uml has had active maintainers/developers. So I expect to see some parity in terms of features and performance numbers eventually. Things like virtio could definitely bring a lot of performance improvements to UML, and ultimately much wider user acceptance.

To quote the Debian package description:

Package: user-mode-linux
Version: 5.2-1um-1
Built-Using: linux (= 5.2.17-1)
Status: install ok installed
Priority: extra
Section: kernel
Maintainer: User Mode Linux Maintainers <team+uml@tracker.debian.org>
Installed-Size: 43.7 MB
Depends: libc6 (>= 2.28)
Recommends: uml-utilities (>= 20040406-1)
Suggests: x-terminal-emulator, rootstrap, user-mode-linux-doc, slirp, vde2
Homepage: http://user-mode-linux.sourceforge.net/
Download-Size: unknown
APT-Manual-Installed: yes
APT-Sources: /var/lib/dpkg/status
Description: User-mode Linux (kernel)
 User-mode Linux (UML) is a port of the Linux kernel to its own system
 call interface.  It provides a kind of virtual machine, which runs
 Linux as a user process under another Linux kernel.  This is useful
 for kernel development, sandboxes, jails, experimentation, and many
 other things.
 .
 This package contains the kernel itself, as an executable program,
 and the associated kernel modules.
User Mode Linux networking

I have been maintaining User Mode Linux for Debian for a couple of years now, but one area where I still waste a lot of time at times is networking.

Today, we have these major virt technologies: Virtualization, Containers and UML. For simplicity, I am keeping UML as a separate type.

For my needs, I prefer keeping a single bridge on my laptop, through which all the different types of technologies can talk to access the network. By keeping everything on the bridge, I get to focus on just one entry/exit point for firewalling, monitoring, troubleshooting etc.

As of today, easily:

  • VirtualBox can talk to my local bridge
  • Libvirt/Qemu can talk to my local bridge
  • Docker can too
  • systemd-nspawn can also

The only challenge comes with UML, wherein I have had to set up a separate tun interface and make UML talk through it using the Tuntap Daemon Mode. And on the host, I do NAT to get it to talk to the internet.

Ideally, I would like to simply tell UML that this is my bridge device and that it should associate itself to it for networking. I looked at vde and found a wrapper vdeq. Something like this for UML could simplify things a lot.

UML also has an =vde mode which it can attach itself to. But from the looks of it, it is similar to what we already provide through uml-utilities in tuntap daemon mode.

I am curious whether other User Mode Linux users have simplified ways to get networking set up, and ideally, whether it can be set up through Debian's networking ifup scripts or through systemd-networkd. If so, I would really appreciate it if you could drop me a note over email, at least until I find a simple and clean way to get comments set up on my blog.


Creative Commons License: the copyright of each article belongs to its respective author. This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported license.