Planet Debian

Planet Debian - https://planet.debian.org/

Craig Small: WordPress 5.0.1

1 hour 30 min ago

While I missed the WordPress 5.0 release, it was only a few more days before there was a security release out.

So WordPress 5.0.1 will be available in Debian soon. This is both a security update (from 5.0 to 5.0.1) and a huge feature update (from the 4.9.x versions to 5.0).

The WordPress website, in their 5.0 announcement, describes all the changes better, but one of the main things is the new editor (which I’m using as I write this). It’s certainly cleaner, or perhaps more sparse. I’m not sure if I like it yet.

The security fixes (there are 7) are the usual things you expect from a WordPress security update: XSS and permission problems, that type of thing.

I have also removed the build dependency on libphp-phpmailer in the 5.0.1 Debian package. The issue with that package is that there won’t be any more security updates for the version in Debian. WordPress has an embedded version of it which *I hope* they maintain. There is an issue open about the phpmailer in WordPress, so hopefully it gets fixed soon.

Dirk Eddelbuettel: linl 0.0.3: Micro release

2 hours 1 min ago

Our linl package for writing LaTeX letters with (R)markdown had a fairly minor release today, following up on the previous release well over a year ago. This version contains just one change, which Mark van der Loo provided a few months ago with a clean PR. As another user was just bitten by the same issue when using an included letterhead – which was fixed but unreleased – we decided it was time for a release. So there it is.

linl makes it easy to write letters in markdown, with some extra bells and whistles thanks to some cleverness chiefly by Aaron.

Here is a screenshot of the vignette showing the simple input for some moderately fancy output:
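Since the screenshot itself does not carry over into this feed, here is a hedged sketch of what such an input file looks like (field names follow my recollection of the package's demo letter, so treat them as assumptions):

---
author: Jane Doe
return-address:
  - 123 Example Street
  - Some City
address:
  - The Recipient
  - 456 Other Street
opening: Dear Madam or Sir,
closing: Sincerely,
output: linl::linl
---

Thank you for your continued interest in writing letters in markdown.

Rendering a file like this with rmarkdown::render() yields the finished LaTeX letter.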

The NEWS entry follows:

Changes in linl version 0.0.3 (2018-12-15)
  • Correct LaTeX double loading of package color with different options (Mark van der Loo in #18 fixing #17).

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the linl page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Gunnar Wolf: Tecnicatura Universitaria en Software Libre: First bunch of graduates!

3 hours 41 min ago

December starts for our family in Argentina, and on our second day here I was invited to the Facultad de Ingeniería y Ciencias Hídricas (FICH) of Universidad Nacional del Litoral (UNL). FICH-UNL has a short (3-year), distance-learning degree program called Tecnicatura Universitaria en Software Libre (TUSL).

The program opened in 2015, and today we had the graduation exams for three of its students — it's no small feat for a recently created program to start graduating its first bunch! And we had one graduate for each of TUSL's "exit tracks" (administration, development and education).

The topics presented by the students were:

  1. An introductory manual for performing migrations and installations of free software-based systems
  2. Design and implementation of a steganography tool for end-users
  3. A Lego system implementation for AppBuilder

I will try to link to their works soon; the TUSL staff is quite well aligned with freedom, transparency and responsibility.

Ingo Juergensmann: Adding NVMe to Server

7 hours 28 min ago

My server runs on a RAID10 of 4x 2 TB WD disks. Basically those disks are fast enough to cope with the disk load of the virtual machines (VMs). But since many users moved away from Facebook and Google, my Friendica installation on Nerdica.Net has a growing user count, putting a large disk I/O load with many small reads & writes on the disks and slowing down general disk I/O for all the VMs and the server itself. On mdraid-sync-Sunday this month the server needed two full days to sync its RAID10.

So the idea was to move the high disk I/O load away from the rotational disks to something different. For that reason I bought a Samsung 970 Pro 512 GB NVMe disk and a matching PCIe 3.0 adapter card to be put into my server in the colocation. On Thursday the Samsung was installed by the rrbone staff in the colocation. I moved the PostgreSQL and MySQL databases from the RAID10 to the NVMe disk and restarted services again.
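For the curious, a hedged sketch of such a move (service names, paths and mount point are assumptions, not the exact commands used):

# stop the databases so the data files are quiescent
systemctl stop postgresql mysql
# copy the data directories onto the NVMe filesystem
rsync -a /var/lib/postgresql /var/lib/mysql /mnt/nvme/
# then point data_directory (PostgreSQL) and datadir (MySQL) at the
# new location -- or bind-mount the copies over the old paths --
# and start the services again
systemctl start postgresql mysql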

Here are some results from Munin monitoring: 

Disk Utilization

Here you can see how the disk utilization dropped after the NVMe installation. The red bar marks the average utilization of the RAID10 disks before, and the green bar the same RAID10 after the databases were moved to the NVMe disk. There's roughly 20% less utilization now, which is good.

Disk Latency

Here you can see the same coloured bars for the disk latency. As you can see, the latency dropped by about a third.

CPU I/O wait

The most significant graph is maybe the CPU graph, where you can see a large portion of iowait on the CPUs. This is no longer true: there is apparently no significant iowait anymore, thanks to the low latency and high IOPS of SSD/NVMe disks.

Overall I cannot confirm that adding the NVMe disk results in significantly faster page loads for Friendica or Mastodon, maybe because other measures like Redis/Memcached or pgbouncer already helped a lot before the NVMe disk. But it helps a lot with general disk I/O load and improves disk speeds inside the VMs, for example for my regular backups and such.

Ah, one thing to report: in a quick test, pgbench now reports >2200 tps on the NVMe disk. That at least is a real speed improvement, maybe by a factor of 10 or so.
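For reference, a pgbench invocation of roughly that shape (client count, duration and database name are my assumptions, not his):

pgbench -c 8 -T 60 mydb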


Petter Reinholdtsen: Learn to program with Minetest on Debian

15 December, 2018 - 21:30

A fun way to learn how to program in Python is to follow the instructions in the book "Learn to Program with Minecraft", which introduces programming in Python to people who like to play with Minecraft. The book uses a Python library that talks to a TCP/IP socket with an API accepting build instructions and providing information about the current players in a Minecraft world. The TCP/IP API was first created for the Minecraft implementation for the Raspberry Pi, and has since been ported to some server versions of Minecraft. The book contains recipes for those using Windows, macOS and Raspbian. But a little known fact is that you can follow the same recipes using the free software construction game Minetest.
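To give a flavour of the recipes, here is a minimal sketch using the mcpi library the book builds on (the coordinates and the stone block id are arbitrary choices of mine):

from mcpi.minecraft import Minecraft

# connect to the running world over the TCP/IP API
mc = Minecraft.create()

mc.postToChat("Hello from Python!")

# place a stone block (id 1) right next to the player
pos = mc.player.getTilePos()
mc.setBlock(pos.x + 1, pos.y, pos.z, 1)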

There is a Minetest module implementing the same API, making it possible to use the Python programs coded to talk to Minecraft with Minetest too. I uploaded this module to Debian two weeks ago, and as soon as it clears the FTP masters NEW queue, learning to program Python with Minetest on Debian will be a simple 'apt install' away. The Debian package is maintained as part of the Debian Games team, and the packaging rules are currently located under 'unfinished' on Salsa.

You will most likely need to install several of the Minetest modules in Debian for the examples included with the library to work well, as several blocks used by the example scripts are provided via modules in Minetest. Without the required blocks, a simple stone block is used instead. My initial testing with an analog clock did not get gold arms as instructed by the Python library, but stone arms instead.

I tried to find a way to add the API to the desktop version of Minecraft, but was unable to find any working recipes. The recipes I found only work with a standalone Minecraft server setup. Are there any options for the normal desktop version?

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Norbert Preining: TeX Live/Debian updates 20181214

15 December, 2018 - 12:58

Another month has passed, and the (hopefully) last upload for this year brings updates to the binaries, the usual big set of macro and font packages, and some interesting and hopefully useful changes.

The version for the binary packages is 2018.20181214.49410-1 and is based on svn revision 49410. That means we get:

  • support some primitives from pdftex in xetex
  • dviout updates
  • kpathsea change for brace expansion
  • various memory fixes, return value fixes etc

The version of the macro/font packages is 2018.20181214-1 and contains the usual set of updated and new packages; see below for the complete list. There is one interesting functional change: we have replaced the . in the various TEXMF* variables with TEXMFDOTDIR, which in turn is defined to '.'. By itself this changes nothing, but it allows users to redefine TEXMFDOTDIR to, say, .// for certain projects, which makes kpathsea automatically find all files in the current directory and below.
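As a sketch (the file name is a placeholder), the variable can be overridden per run from the environment:

TEXMFDOTDIR=.// pdflatex main.tex

or set in a project-local texmf.cnf for something more permanent.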

Now for the full list of updates and new packages. Enjoy!

New packages

cweb-old, econ-bst, eqexpl, icon-appr, makecookbook, modeles-factures-belges-assocs, pdftex-quiet, pst-venn, tablvar, tikzlings, xindex.

Updated packages

a2ping, abnt, abntex2, acmart, acrotex, addlines, adigraph, amsmath, animate, auto-pst-pdf-lua, awesomebox, babel, babel-german, beamer, beamertheme-focus, beebe, bib2gls, biblatex-abnt, biblatex-archaeology, biblatex-ext, biblatex-gb7714-2015, biblatex-publist, bibleref, bidi, censor, changelog, colortbl, convbkmk, covington, cryptocode, cweb, dashundergaps, datatool, datetime2-russian, datetime2-samin, diffcoeff, dynkin-diagrams, ebgaramond, enumitem, fancyvrb, glossaries-extra, grabbox, graphicxsp, hagenberg-thesis, hyperref, hyperxmp, ifluatex, japanese-otf-uptex, japanese-otf-uptex-nonfree, jigsaw, jlreq, jnuexam, jsclasses, ketcindy, knowledge, kpathsea, l3build, l3experimental, l3kernel, latex, latex2man, libertinus-otf, lipsum, luatexko, lwarp, mcf2graph, memoir, modeles-factures-belges-assocs, oberdiek, pdfx, platex, platex-tools, plautopatch, polexpr, pst-eucl, pst-fractal, pst-func, pst-math, pst-moire, pstricks, pstricks-add, ptex2pdf, quran, rec-thy, register, reledmac, rutitlepage, sourceserifpro, srdp-mathematik, suftesi, svg, tcolorbox, tex4ht, texdate, thesis-ekf, thesis-qom, tikz-cd, tikzducks, tikzmarmots, tlshell, todonotes, tools, toptesi, ucsmonograph, univie-ling, uplatex, widows-and-orphans, witharrows, xepersian, xetexref, xindex, xstring, xurl, zhlineskip.

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, November 2018

15 December, 2018 - 05:50

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, about 209 work hours have been dispatched among 14 paid contributors. Their reports are available online.

Evolution of the situation

In November we had a few more hours available to dispatch than we had contributors willing and able to do the work, and thus we are actively looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The number of sponsored hours stayed the same at 212 hours per month but we actually lost two sponsors and gained a new one (silver level).

The security tracker currently lists 30 packages with a known CVE and the dla-needed.txt file has 32 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Eddy Petrișor: rust for cortex-m7 baremetal

15 December, 2018 - 04:20
Update 14 December 2018: After the release of stable 1.31.0 (aka the 2018 Edition), it is no longer necessary to switch to the nightly channel to get access to the thumbv7em-none-eabi / Cortex-M4 and Cortex-M7 components. Examples and commands have been updated accordingly.
For more details on embedded development using Rust, the official Rust embedded docs site is the place to go, in particular, you can start with The embedded Rust book.
 
 
This is a reminder for myself: if you want to install Rust for a baremetal Cortex-M7 target, this seems to be a tier 3 platform:

https://forge.rust-lang.org/platform-support.html

Highlighting the relevant part:

Target                  std  rustc  cargo  notes
...
msp430-none-elf         *                  16-bit MSP430 microcontrollers
sparc64-unknown-netbsd  ✓    ✓             NetBSD/sparc64
thumbv6m-none-eabi      *                  Bare Cortex-M0, M0+, M1
thumbv7em-none-eabi     *                  Bare Cortex-M4, M7
thumbv7em-none-eabihf   *                  Bare Cortex-M4F, M7F, FPU, hardfloat
thumbv7m-none-eabi      *                  Bare Cortex-M3
...
x86_64-unknown-openbsd  ✓    ✓             64-bit OpenBSD
In order to enable the relevant support, use stable >= 1.31.0 and add the relevant target. Before 1.31.0 this required the nightly toolchain:

eddy@feodora:~/usr/src/rust-uc$ rustup show
Default host: x86_64-unknown-linux-gnu

installed toolchains
--------------------

stable-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu (default)

active toolchain
----------------

nightly-x86_64-unknown-linux-gnu (default)
rustc 1.28.0-nightly (cb20f68d0 2018-05-21)

With stable >= 1.31.0 the default toolchain can simply stay on stable:

eddy@feodora:~/usr/src/rust$ rustup show
Default host: x86_64-unknown-linux-gnu

stable-x86_64-unknown-linux-gnu (default)
rustc 1.31.0 (abe02cefd 2018-12-04)

Back when nightly was needed, switching to it looked like this:

eddy@feodora:~/usr/src/rust-uc$ rustup default nightly-x86_64-unknown-linux-gnu
info: using existing install for 'nightly-x86_64-unknown-linux-gnu'
info: default toolchain set to 'nightly-x86_64-unknown-linux-gnu'

  nightly-x86_64-unknown-linux-gnu unchanged - rustc 1.28.0-nightly (cb20f68d0 2018-05-21)
Add the needed target:
eddy@feodora:~/usr/src/rust$ rustup target add thumbv7em-none-eabi
info: downloading component 'rust-std' for 'thumbv7em-none-eabi'
info: installing component 'rust-std' for 'thumbv7em-none-eabi'
eddy@feodora:~/usr/src/rust$ rustup show
Default host: x86_64-unknown-linux-gnu

installed targets for active toolchain
--------------------------------------

thumbv7em-none-eabi
x86_64-unknown-linux-gnu

active toolchain
----------------

stable-x86_64-unknown-linux-gnu (default)
rustc 1.31.0 (abe02cefd 2018-12-04)Then compile with --target.

Ben Hutchings: Debian LTS work, November 2018

15 December, 2018 - 02:18

I was assigned 20 hours of work by Freexian's Debian LTS initiative and worked all those hours.

I prepared and released another stable update for Linux 3.16 (3.16.61), but have not yet included this in a Debian upload.

I updated the firmware-nonfree package to fix security issues in wifi firmware and to provide additional firmware that may be requested by drivers in Linux 4.9. I issued DLA 1573-1 to describe this update.

I worked on documenting a bug in the installer that delays installation of updates to the kernel. The installer can only be updated as part of a point release, and these are not done after the LTS team takes on responsibility for a release. Laura Arjona drafted an addition to the Errata section of the official jessie installer page and I reviewed this. I also updated the LTS/Installing wiki page.

I also participated in various discussions on the debian-lts mailing list.

Molly de Blanc: The OSD and user freedom

14 December, 2018 - 02:50
Some background reading

The relationship between open source and free software is fraught with people arguing about meanings and value. In spite of all the things we’ve built up around open source and free software, they reduce down to both being about software freedom.

Open source is about software freedom. It has been the case since “open source” was created.

In 1986 the Four Freedoms of Free Software (4Fs) were written. In 1998 Netscape set its source code free. Later that year a group of people got together and Christine Peterson suggested that, to avoid ambiguity, there was a “need for a better name” than free software. She suggested open source after open source intelligence. The name stuck and 20 years later we argue about whether software freedom matters to open source, because too many global users of the term have forgotten (or never knew) that some people just wanted another way to say software that ensures the 4Fs.

Once there was a term, the term needed a formal definition: how do we describe what open source is? That’s where the Open Source Definition (OSD) comes in.

The OSD is a set of ten points that describe what an open source license looks like. The OSD came from the Debian Free Software Guidelines. The DFSG themselves were created to “determine if a work is free” and ought to be considered a way of describing the 4Fs.

Back to the present

I believe that the OSD is about user freedom. This is an abstraction from “open source is about free software.” As I alluded to earlier, this is an intuition I have, a thing I believe, and an argument I’m having a very hard time trying to make.

I think of free software as software that exhibits or embodies software freedom — it’s software created using licenses that ensure that the works they are attached to protect the 4Fs. This is all a tool, a useful tool, for protecting user freedom.

The line that connects the OSD and user freedom is not a short one: the OSD defines open source -> open source is about software freedom -> software freedom is a tool to protect user freedom. I think this is, however, a very valuable reduction we can make. The OSD is another tool in our toolbox when we’re trying to protect the freedom of users of computers and computing technology.

Why does this matter (now)?

I would argue that this has always mattered, and that we’ve done a bad job of talking about it. I want to talk about this now because it’s become increasingly clear that people simply never understood (or even heard of) the connection between user freedom and open source.

I’ve been meaning to write about this for a while, and I think it’s important context for everything else I say and write about in relation to the philosophy behind free and open source software (FOSS).

FOSS is a tool. It’s not a tool about developmental models or corporate enablement — though some people and projects have benefited from the kinds of development made possible through sharing source code, and some companies have created very financially successful models based on it as well. In both historical and contemporary contexts, software freedom is at the heart of open source. It’s not about corporate benefit, it’s not about money, and it’s not even really about development. Methods of development are tools being used to protect software freedom, which in turn is a tool to protect user freedom. User freedom, and what we get from that, is what’s valuable.

Side note

At some future point, I’ll address why user freedom matters, but in the meantime, here are some talks I gave (with Karen Sandler) on the topic.

Joachim Breitner: Thoughts on bootstrapping GHC

13 December, 2018 - 20:02

I am returning from the Reproducible Builds summit 2018 in Paris. The latest hottest thing within the reproducible-builds project seems to be bootstrapping: how can we build a whole operating system from just and only source code, using very little, or even no, binary seeds or auto-generated files? This is actually a concern that is somewhat orthogonal to reproducibility: bootstrappable builds help me trust programs that I built, while reproducible builds help me trust programs that others built.

And while they are making good progress bootstrapping a full system from just a C compiler written in Scheme and a Scheme interpreter written in C that can build each other (Janneke’s mes project), with plans to build that on top of stage0, which starts with 280 bytes of binary, the situation looks pretty bad when it comes to Haskell.

Unreachable GHC

The problem is that contemporary Haskell has only one viable implementation, GHC. And GHC, written in contemporary Haskell, needs GHC to be built. So essentially everybody out there either just downloads a binary distribution of GHC, or builds GHC from source using a possibly older (but not much older) version of GHC that they already have. Even distributions like Debian do nothing different: when they build the GHC package, the builders use, well, the GHC package.

There are other Haskell implementations out there. But if they are mature and actively developed, then they are implemented in Haskell themselves, often even using advanced features that only GHC provides. And even those are insufficient to build GHC itself, let alone the old and abandoned Haskell implementations.

In all these cases, at some point an untrusted binary is used. This is very unsatisfying. What can we do? I don’t have the answers, but please allow me to outline some venues of attack.

Retracing history

Obviously, even GHC has not existed since the beginning of time, and the first versions surely were built using something other than GHC. The oldest version of GHC for which we can find a release on the GHC web page is version 0.29 from July 1996. But the installation instructions say:

GHC 0.26 doesn't build with HBC. (It could, but we haven't put in the effort to maintain it.)

GHC 0.26 is best built with itself, GHC 0.26. We heartily recommend it. GHC 0.26 can certainly be built with GHC 0.23 or 0.24, and with some earlier versions, with some effort.

GHC has never been built with compilers other than GHC and HBC.

HBC is a Haskell compiler where we find the sources of one random version only thanks to archive.org. It is written in C, so that should be the solution: Compile HBC, use it to compile GHC-0.29, and then step for step build every (major) version of GHC until today.

The problem is that it is non-trivial to build software from the 90s using today's compilers. I briefly looked at the HBC code base, and had to change some files from using varargs.h to stdarg.h, and this is surely just one of many similar stumbling blocks in trying to build these tools. Oh, and even the hbc sources state:

# To get everything done: make universe
# It is impossible to make from scratch.
# You must have a running lmlc, to
# recompile it (of course).

At this point I ran out of time.

Going back, but doing it differently

Another approach is to go back in time to some old version of GHC, but maybe not all the way to the beginning, and then try to use another, officially unsupported, Haskell compiler to build GHC. This is what rekado tried to do in 2017: he used the most contemporary implementation of Haskell in C, the Hugs interpreter. Using this, he compiled nhc98 (yet another abandoned Haskell implementation), with the hope of building GHC with nhc98. He made impressive progress back then, but ran into a problem where the runtime crashed. Maybe someone is interested in picking up from there?

Removing, simplifying, extending, in the present.

Both approaches so far focus on building an old version of GHC. This adds complexity: other tools (the shell, make, yacc etc.) may behave differently now in a way that causes hard-to-debug problems. So maybe it is more fun and more rewarding to focus on today’s GHC? (At this point I am starting to hypothesize.)

I said before that no other existing Haskell implementation can compile today’s GHC code base, because of features like mutually recursive modules, the foreign function interface etc. And also other existing Haskell implementations often come with a different, smaller set of standard libraries, but GHC assumes base, so we would have to build that as well...

But we don’t need to build it all. Surely there is much code in base that is not used by GHC. Also, there is much code in GHC that we do not need in order to build GHC. By removing all that, we reduce the amount of Haskell code that we need to feed to the other implementation.

The remaining code might use some features that are not supported by our bootstrapping implementation. Mutually recursive modules could be manually merged. GADTs that are only used for additional type safety could be replaced by normal ones, which might make some pattern matches incomplete. Syntactic sugar can be desugared. By simplifying the code base in that way, one might be able to produce a fork of GHC that is within reach of the likes of Hugs or nhc98.
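As a hedged illustration of that kind of simplification (my toy example, not code from GHC):

{-# LANGUAGE GADTs #-}

-- a GADT used only for extra type safety ...
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool

-- ... can be replaced by a plain ADT; pattern matches that relied on
-- the type index may become incomplete, but the program still works
data Expr' = IntE' Int | BoolE' Bool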

And if there are features that are hard to remove, maybe we can extend the bootstrapping compiler or interpreter to support them? For example, it was mostly trivial to extend Hugs with support for the # symbol in names -- and we can be pragmatic and just allow it always, since we don’t need a standards conforming implementation, but merely one that works on the GHC code base. But how much would we have to implement? Probably this will be more fun in Haskell than in C, so maybe extending nhc98 would be more viable?

Help from beyond Haskell?

Or maybe it is time to create a new Haskell compiler from scratch, written in something other than Haskell? Maybe some other language that is reasonably pleasant to write a compiler in (Ocaml? Scala?), but that has the bootstrappability story already sorted out somehow.

But in the end, all variants come down to the same problem: Writing a Haskell compiler for full, contemporary Haskell as used by GHC is hard and really a lot of work -- if it were not, there would at least be implementations in Haskell out there. And as long as nobody comes along and does that work, I fear that we will continue to be unable to build our nice Haskell ecosystem from scratch. Which I find somewhat dissatisfying.

Junichi Uekawa: Already December.

13 December, 2018 - 18:39
Already December. Nice. I tried using tramp for a while, but I am back to mosh; tramp is not usable when the ssh connection is unreliable.

Keith Packard: newt

13 December, 2018 - 14:55
Newt: A Tiny Embeddable Python Subset

I've been helping teach robotics programming to students in grades 5 and 6 for a number of years. The class uses Lego models for the mechanical bits, and a variety of development environments, including Robolab and Lego Logo on both Apple ][ and older Macintosh systems. Those environments are quite good, but when the Apple ][ equipment died, I decided to try exposing the students to an Arduino environment so that they could get another view of programming languages.

The Arduino environment has produced mixed results. The general nature of a full C++ compiler and the standard Arduino libraries means that building even simple robots requires considerable typing, including a lot of punctuation and upper case letters. Further, the edit/compile/test process is quite long, making fixing errors slow. On the positive side, many of the students have gone on to use Arduinos in science research projects for middle and upper school (grades 7-12).

In other environments, I've seen Python used as an effective teaching language; the direct interactive nature invites exploration and provides rapid feedback for the students. It seems like a pretty good language to consider for early education -- "real" enough to be useful in other projects, but simpler than C++/Arduino has been. However, I haven't found a version of Python that seems suitable for the smaller microcontrollers I'm comfortable building hardware with.

How Much Python Do We Need?

Python is a pretty large language in embedded terms, but there's actually very little I want to try and present to the students in our short class (about 6 hours of language introduction and another 30 hours or so of project work). In particular, all we're using on the Arduino are:

  • Numeric values
  • Loops and function calls
  • Digital and analog I/O

Remembering my childhood Z-80 machine with its BASIC interpreter, I decided to think along those lines in terms of capabilities. I think I can afford more than 8kB of memory for the implementation, and I really do want to have "real" functions, including lexical scoping and recursion.

I'd love to make this work on our existing Arduino Duemilanove compatible boards. Those have only 32kB of flash and 2kB of RAM, so that might be a stretch...

What to Include

Exploring Python, I think there's a reasonable subset that can be built here. Included in that are:

  • Lists, numbers and string types
  • Global functions
  • For/While/If control structures.
What to Exclude

It's hard to describe all that hasn't been included, but here's some major items:

  • Objects, Dictionaries, Sets
  • Comprehensions
  • Generators (with the exception of range)
  • All numeric types aside from single-precision float
Implementation

Newt is implemented in C, using flex and bison. It includes the incremental mark/sweep compacting GC system I developed for my small scheme interpreter last year. That provides a relatively simple to use and efficient memory system.

The Newt “Compiler”

Instead of directly executing a token stream as my old BASIC interpreter did, Newt compiles to a byte-coded virtual machine. Of course, we have no memory, so we don't generate a parse tree and perform optimizations on that. Instead, code is generated directly in the grammar productions.

The Newt “Virtual Machine”

With the source compiled to byte codes, execution is pretty simple -- read a byte code, execute some actions related to it. To keep things simple, the virtual machine has a single accumulator register and a stack of other values.

Global and local variables are stored in 'frames', with each frame implemented as a linked list of atom/value pairs. This isn't terribly efficient in space or time, but made it quick to implement the required Python semantics for things like 'global'.
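A hedged C sketch of that frame representation (type and field names are mine, not Newt's actual source):

typedef int    atom_t;   /* stand-in: an interned variable name */
typedef double value_t;  /* stand-in for Newt's value representation */

typedef struct entry {
    atom_t        name;
    value_t       value;
    struct entry *next;   /* next atom/value pair in this frame */
} entry_t;

typedef struct frame {
    entry_t      *entries; /* the linked list of pairs */
    struct frame *parent;  /* enclosing scope; NULL for globals */
} frame_t;

Lookup walks the list in the current frame and then the parent chain, which is what makes 'global' easy to support but access linear-time.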

Lists and tuples are simple arrays in memory, just like in CPython. I use the same sizing heuristic for lists that Python does; no sense inventing something new for that. Strings are C strings.

When calling a non-builtin function, a new frame is constructed that includes all of the formal names. Those get assigned values from the provided actuals and then the instructions in the function are executed. As new locals are discovered, the frame is extended to include them.

Testing

Any new language implementation really wants to have a test suite to ensure that the desired semantics are implemented correctly. One huge advantage for Newt is that we can cross-check the test suite by running it with Python.
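For example, a tiny test in the shared subset (my sketch; it uses only numbers, functions, for and range, all described above) should behave the same under Newt and Python, modulo numeric formatting given Newt's single-precision floats:

def fib(n):
    a = 0
    b = 1
    for i in range(n):
        t = a + b
        a = b
        b = t
    return a

print(fib(10))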

Current Status

I think Newt is largely functionally complete at this point; I just finished adding the limited for statement capabilities this evening. I'm sure there are a lot of bugs to work out, and I expect to discover additional missing functionality as we go along.

I'm doing all of my development and testing on my regular x86 laptop, so I don't know how big the system will end up on the target yet.

I've written 4836 lines of code for the implementation and another 65 lines of Python for simple test cases. When compiled -Os for x86_64, the system is about 36kB of text and another few bytes of initialized data.

Links

The source code is available from my server at https://keithp.com/cgit/newt.git/, and also at github https://github.com/keith-packard/newt. It is licensed under the GPLv2 (or later version).

Petter Reinholdtsen: Non-blocking bittorrent plugin for vlc

12 December, 2018 - 13:20

A few hours ago, a new and improved version (2.4) of the VLC bittorrent plugin was uploaded to Debian. This new version includes a complete rewrite of the bittorrent related code, which seems to make the plugin non-blocking. This means you can actually exit VLC even when the plugin seems unable to get the bittorrent streaming started. The new version also includes support for filtering the playlist by file extension using command line options, if you want to avoid processing audio, video or images. The package is currently in Debian unstable, but should be available in Debian testing in two days. To test it, simply install it like this:

apt install vlc-plugin-bittorrent

After it is installed, you can try to use it to play a file downloaded live via bittorrent like this:

vlc https://archive.org/download/Glass_201703/Glass_201703_archive.torrent

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Matthew Palmer: Falsehoods Programmers Believe About Pagination

12 December, 2018 - 07:00

The world needs it, so I may as well write it.

  • The number of items on a page is fixed for all time.
  • The number of items on a page is fixed for one user.
  • The number of items on a page is fixed for one result set.
  • The pages are only browsed in one direction.
  • No item will be added to the result set during retrieval.
  • No item will be removed from the result set during retrieval.
  • Item sort order is stable.
  • Only one page of results will be retrieved at one time.
  • Pages will be retrieved in order.
  • Pages will be retrieved in a timely manner.
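One way around several of these falsehoods (my sketch, not from the post) is keyset pagination: key each page off the last item seen, on a stable unique key, instead of off a page number.

# keyset ("cursor") pagination: tolerant of items being added or
# removed mid-retrieval, unlike fixed page-number offsets
def fetch_page(items, last_id=None, page_size=3):
    # items must be sorted by a unique, stable key ("id" here)
    if last_id is not None:
        items = [it for it in items if it["id"] > last_id]
    return items[:page_size]

data = [{"id": i} for i in range(10)]
page = fetch_page(data)
while page:
    print([it["id"] for it in page])
    page = fetch_page(data, last_id=page[-1]["id"])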

Louis-Philippe Véronneau: Montreal Bug Squashing Party - Jan 19th & 20th 2019

12 December, 2018 - 06:15

We are organising a BSP in Montréal in January! Unlike the one we organised for the Stretch release, this one will be over a whole weekend so hopefully folks from other provinces in Canada and from the USA can come.

So yeah, come and squash bugs with us! Montreal in January can be cold, but it's usually snowy and beautiful too.

As always, the Debian Project is willing to reimburse 100 USD (or equivalent) of expenses to attend Bug Squashing Parties. If you can find a cheap flight or want to car pool with other people that are interested, going to Montréal for a weekend doesn't sound that bad, eh?

When: January 19th and 20th 2019

Where: Montréal, Eastern Bloc

Why: to squash bugs!

Reproducible builds folks: Reproducible Builds: Weekly report #189

11 December, 2018 - 23:01

Here’s what happened in the Reproducible Builds effort between Sunday December 2 and Saturday December 8 2018:

Packages reviewed and fixed, and bugs filed

Test framework development

There were a number of updates to our Jenkins-based testing framework that powers tests.reproducible-builds.org this week, including:

  • Chris Lamb:
    • Re-add support for calculating a PureOS package set. (MR: 115)
    • Support arbitrary package filters when generating deb822 output. (MR: 22)
    • Add missing DBDJSON_PATH import. (MR: 21)
    • Correct Tails’ build manifest URL. (MR: 20)
  • Holger Levsen:
    • Ignore disk full false-positives building the GNU C Library. []
    • Various node maintenance. (eg. [], [], etc.)
    • Exclusively use the database to track blacklisted packages in Arch Linux. []

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Muz & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Bits from Debian: Debian Cloud Sprint 2018

11 December, 2018 - 18:30

The Debian Cloud team held a sprint for the third time, hosted by Amazon at its Seattle offices from October 8th to October 10th, 2018.

We discussed the status of images on various platforms, especially in light of moving to FAI as the only method for building images on all the cloud platforms. The next topic was build and test workflows, including the use of Debian machines for building, testing, storing, and publishing built images. This was partially prompted by the move of all repositories to Salsa, which allows for better management of code changes, especially reviewing of new code.

Recently we have made progress supporting cloud usage cases: grub and kernel optimised for cloud images help reduce boot time and the required memory footprint. There is also growing interest in non-x86 images, and FAI can now build such images.

Discussion of support for LTS images, which started at the sprint, has now moved to the debian-cloud mailing list. We also discussed providing many image variants, which requires a more advanced and automated workflow, especially regarding testing. Further discussion touched upon providing newer kernels and software like cloud-init from backports. As interest in using Secure Boot is increasing, we might cooperate with other teams and use their work on UEFI to provide images with a signed boot loader and kernel.

Another topic of discussion was the management of accounts used by Debian to build and publish Debian images. SPI will create and manage such accounts for Debian, including user accounts (synchronised with Debian accounts). Buster images should be published using those new accounts. Our Cloud Team delegation proposal (prepared by Luca Filipozzi) was accepted by the Debian Project Leader. Sprint minutes are available, including a summary and a list of action items for individual members.

Dirk Eddelbuettel: RQuantLib 0.4.7: Now with corrected Windows library

11 December, 2018 - 17:47

A new version, 0.4.7, of RQuantLib reached CRAN and Debian. It follows up on the recent 0.4.6 release post, which contained a dual call for help: RQuantLib was (is!!) still in need of a macOS library build, but also experienced issues on Windows.

Since then we have set up a new (open) mailing list for RQuantLib and, I am happy to report, sorted that Windows issue out! In short, with the older g++ 4.9.3 imposed for R via Rtools, we must add an explicit C++11 flag at configuration time. Special thanks to Josh Ulrich for tireless and excellent help with testing these configurations, and to everybody else on the list!
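In R packaging terms, the usual way to request that flag (a sketch; the actual change in the package may differ) is a one-line entry in src/Makevars.win:

CXX_STD = CXX11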

QuantLib is a very comprehensive free/open-source library for quantitative finance, and RQuantLib connects it to the R environment and language.

This release re-enables most examples and tests that were disabled when Windows performance was shaky (due to, as we now know, a misconfiguration of ours for the Windows binary library used). With the exception of the AffineSwaption example when running on Windows i386, everything is back!

The complete set of changes is listed below:

Changes in RQuantLib version 0.4.7 (2018-12-10)
  • Changes in RQuantLib tests:

    • Thanks to the updated #rwinlib/quantlib Windows library provided by Josh, all tests that previously exhibited issues have been re-enabled (Dirk in #126).
  • Changes in RQuantLib documentation:

    • The CallableBonds example now sets an evaluation date (#124).

    • Thanks to the updated #rwinlib/quantlib Windows library provided by Josh, examples that were set to dontrun are re-activated (Dirk in #126). AffineSwaption remains the sole holdout.

  • Changes in RQuantLib build system:

    • The src/Makevars.win file was updated to reflect the new layout used by the upstream build.

    • The -DBOOST_NO_AUTO_PTR compilation flag is now set.

As stated above, we are still looking for macOS help though. Please get in touch on-list if you can help build a library for Simon’s recipes repo.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Julien Danjou: Podcast.__init__: Gnocchi, a Time Series Database for your Metrics

11 December, 2018 - 16:50

A few weeks ago, Tobias Macey contacted me as he wanted to talk about Gnocchi, the time series database I've been working on for the last few years.

It was a great opportunity to talk about the project, so I jumped on it! We talk about how Gnocchi came to life, how we built its architecture, the challenges we met, the kinds of trade-offs we made, etc.

You can listen to this episode here.


Creative Commons License. The copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.