Planet Debian

Planet Debian - http://planet.debian.org/

Mirco Bauer: Ethereum GPU Mining on Linux How-To

7 August, 2016 - 05:35
TL;DR

Install/use Debian 8 or Ubuntu 16.04, then execute:

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ethereum/ethereum
sudo sed 's/jessie/vivid/' -i /etc/apt/sources.list.d/ethereum-ethereum-*.list
sudo apt-get update
sudo apt-get install ethereum ethminer
geth account new
# copy long character sequence within {}, that is your <YOUR_WALLET_ADDRESS>
# if you lose the passphrase, you lose your coins!
sudo apt-get install linux-headers-amd64 build-essential
chmod +x NVIDIA-Linux-x86_64-367.35.run
sudo ./NVIDIA-Linux-x86_64-367.35.run
ethminer -G -F http://yolo.ethclassic.faith:9999/0x<YOUR_WALLET_ADDRESS> --farm-recheck 200
echo done
My Attention Span is > 60 seconds

Ethereum is a cryptocurrency similar to Bitcoin in that it is based on blockchain technology. Ethereum is not just another Bitcoin clone, though, since it has an additional feature called Smart Contracts that makes it unique and very promising. I am not going into the details of how Ethereum works; you can find those in great detail on the Internet. This post is about Ethereum mining. Mining is how crypto coins are created: you need to spend computing time to get coins out. At the beginning CPU mining was sufficient, but as the Ethereum network difficulty has increased you need to use GPUs, as they can calculate at a much higher hashrate than a general-purpose CPU can.

About 2 months ago I bought a new gaming rig with an Nvidia GTX 1070, so I could experience virtual-reality gaming with an HTC Vive at a great framerate. As it turns out, modern graphics cards are very good at hashing, so I gave it a spin.

Initially I did this mining setup with Windows 10, as that is the operating system on my gaming rig. But if you want to do Ethereum mining using your GPU, you really want to use Linux. On Windows the GTX 1070 produced a hashrate of 6 MH/s (megahashes per second), while the same hardware does 25 MH/s on Linux; the hashrate roughly quadrupled just by switching from Windows to Linux. Sounds good? Keep reading and follow this guide.

You have to pick a Linux distro to use for mining. As I am a Debian developer, all my systems run Debian, which is what I am also using for this guide. The same procedure works for Ubuntu, as it is similar enough; for other distros you have to adapt the steps yourself. So I assume you already have Debian 8 or Ubuntu 16.04 installed on your system.

Install Ethereum Software

First we need the geth tool, which is the main Ethereum "client". Ethereum is really a peer-to-peer network, meaning each node is a server and a client at the same time. A node that contains the complete blockchain history in a database is called a full node. For this guide you don't need to run a full node, as mining pools do that for you. We still need geth to create the private key of your Ethereum wallet; after all, we need somewhere to receive the coins we are mining.

Add the Ethereum APT repository using these commands:

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ethereum/ethereum
sudo apt-get update

On Debian 8 (on Ubuntu you can skip this) you need to replace the repository name with this command:

sudo sed 's/jessie/vivid/' -i /etc/apt/sources.list.d/ethereum-ethereum-*.list
sudo apt-get update

Install ethereum, ethminer and geth:

sudo apt-get install ethereum ethminer geth
Create Ethereum Wallet

A wallet is where coins are "stored". They are not really stored in the wallet, though, because the wallet is just a private key that nobody else has. The balance of that wallet is visible to everyone using the blockchain database, and this is what full nodes do: they contain and distribute the database to all other peers. So use this command to create your first private key for your wallet:

geth account new

Be aware that this passphrase protects the private key of your wallet. Anyone who has access to that file and knows your passphrase will have full control over your coins. Also, do not forget the passphrase; if you do, you lose all your coins!

The output of "geth account new" shows a long character/number sequence quoted in {}. This is your wallet address, and you should write it down: if someone wants to send you money, it goes to that address. We will also use it for the mining pool later.
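If you ever need to look the address up again later, geth can list the accounts it manages. A minimal sketch (the exact output format may vary between geth versions):

geth account list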

Install (proprietary) nvidia driver

For OpenCL to work with Nvidia graphics cards, like my GTX 1070, you need to install the proprietary driver from Nvidia. If you have an older card, maybe the open-source drivers will work for you. For the Nvidia Pascal cards (the 10xx series) you will need this driver package.

After you have agreed to the terms, download the NVIDIA-Linux-x86_64-367.35.run file. Before we can use that installer, we need to install some dependencies, as the installer will have to compile a Linux kernel module for you. Install the dependencies using this command:

sudo apt-get install linux-headers-amd64 build-essential

Now we can make the installer executable and run it like this:

chmod +x NVIDIA-Linux-x86_64-367.35.run
sudo ./NVIDIA-Linux-x86_64-367.35.run

If that step completed without error, then we should be able to run the mining benchmark!

ethminer -M -G

The -M means "run benchmark" and the -G is for GPU mining. The first time you run it, it will create a DAG file, and that will take a while; for me it took about 12 minutes on my GTX 1070. After that it should show an inner mean hashrate. H/s is hashes per second, KH/s is kilohashes per second (1,000 H/s) and MH/s is megahashes per second (1,000 KH/s), so 25 MH/s is 25,000,000 hashes per second. I got numbers around 25-30 MH/s, but for real mining you will see an average as a single balanced number rather than a min/max range.

Pick Ethereum Network

Now it gets serious: you need to decide two things. First, which Ethereum network you want to mine for, and second, which pool to use.

Ethereum has two networks: one is called Ethereum One or Core, while the other is called Ethereum Classic. Ethereum made a hardfork to undo the consequences of a software bug in the DAO, a smart contract for a decentralized organization. Because of that bug, a blackhat could obtain money from the DAO. The Ethereum developers held a poll and decided that the consequences would be undone. Not everyone agreed, so the old network stayed alive and is now called Ethereum Classic, ETC for short. The hardfork kept the original short name, ETH.

This is important to understand for mining, because there is a huge difference in hashing difficulty between ETH and ETC. As of writing, the hashrate of ETC is at 20% of ETH's. Thus you need less computing time to get ETC coins and more time to get ETH coins. Put differently, ETC mining is currently more profitable.

Pick a Pool

Hmmmm, I want a swimming pool, thanks! Just kidding... You can mine without a pool, which is called solo mining, but you will get less reward. A mining pool is multiple computers working on the same block to find a solution quicker than others; the pool has an aggregated hashrate that is higher than any solo miner's. Each block found by anyone in the pool is rewarded to everyone in the pool: the reward (currently 5 ether per block) gets split in proportion to the hashrate each member provides, minus the pool fee. So while you get less for a found block, you get a steady lower income instead of a higher reward with only a small chance of ever finding a block in time. Simply said: you have to find a new block faster than the others to receive the reward.
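To make the split concrete, here is a rough back-of-the-envelope calculation; the hashrates and the 1% fee are made-up example numbers, not figures from any particular pool:

# your hashrate: 25 MH/s, total pool hashrate: 5000 MH/s, block reward: 5 coins, pool fee: 1%
echo "scale=6; 25 / 5000 * 5 * 0.99" | bc
# => roughly 0.02475 coins credited to you for that one block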

If you want to mine Ethereum Classic (ETC) use one of the pools listed here (at the bottom of the page).

If you want to mine Ethereum One / Core (ETH) use one of the pools listed here.

Run ethminer

The instruction page of the pool website usually says how to start the miner program, but here is an example of the pool that I use (because pony!):

ethminer -G -F http://yolo.ethclassic.faith:9999/0x<YOUR_WALLET_ADDRESS> --farm-recheck 200
Profit

If this guide was helpful for you, you can tip me at ethereum:0x9ec1220d2f2fadd3f0c96e3007daa827bc83fbd6 or simply run ethminer with my wallet address for a day or two:

ethminer -G -F http://yolo.ethclassic.faith:9999/0x9ec1220d2f2fadd3f0c96e3007daa827bc83fbd6 --farm-recheck 200

Happy mining!

Robert Edmonds: Cable modems: Arris SB6190 vs. Netgear CM600

7 August, 2016 - 04:52

Recently I activated new cable ISP service at my home and needed to purchase a cable modem. There were only a few candidate devices that supported at least 24 downstream channels (preferably 32), and did not contain an integrated router or access point.

The first modem I tried was the Arris SB6190, which supports 32 downstream channels. It is based on the Intel Puma 6 SoC, and looking at an older release of the SB6190 firmware source reveals that the device runs Linux. This device, running the latest 9.1.93N firmware, goes into a failure mode after several days of uptime, which causes it to drop 1-2% of packets. Here is a SmokePing graph that measures latency to my ISP's recursive DNS server, showing the transition into the “degraded” mode:

It didn't drop packets at random, though. Some traffic would be deterministically dropped, such as the parallel A/AAAA DNS lookups generated by the glibc DNS stub resolver. For instance, in the following tcpdump output:

[1] 17:31:46.989073 IP [My IP].50775 > 75.75.75.75.53: 53571+ A? www.comcast6.net. (34)
[2] 17:31:46.989102 IP [My IP].50775 > 75.75.75.75.53: 14987+ AAAA? www.comcast6.net. (34)
[3] 17:31:47.020423 IP 75.75.75.75.53 > [My IP].50775: 53571 2/0/0 CNAME comcast6.g.comcast.net., […]
[4] 17:31:51.993680 IP [My IP].50775 > 75.75.75.75.53: 53571+ A? www.comcast6.net. (34)
[5] 17:31:52.025138 IP 75.75.75.75.53 > [My IP].50775: 53571 2/0/0 CNAME comcast6.g.comcast.net., […]
[6] 17:31:52.025282 IP [My IP].50775 > 75.75.75.75.53: 14987+ AAAA? www.comcast6.net. (34)
[7] 17:31:52.056550 IP 75.75.75.75.53 > [My IP].50775: 14987 2/0/0 CNAME comcast6.g.comcast.net., […]

Packets [1] and [2] are the A and AAAA queries being initiated in parallel. Note that they both use the same 4-tuple of (Source IP, Destination IP, Source Port, Destination Port), but with different DNS IDs. Packet [3] is the response to packet [1]. The response to packet [2] never arrives, and five seconds later, the glibc stub resolver times out and retries in single-request mode, which performs the A and AAAA queries sequentially. Packets [4] and [5] are the type A query and response, and packets [6] and [7] are the AAAA query and response.
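As an aside, the parallel A/AAAA behaviour of the glibc stub resolver can also be disabled outright. A possible workaround, independent of the modem, is the single-request option in /etc/resolv.conf (documented in resolv.conf(5)), which makes the stub resolver issue the two queries sequentially:

# /etc/resolv.conf
nameserver 75.75.75.75
options single-request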

The Arris SB6190 running firmware 9.1.93N would consistently interfere with these parallel DNS requests, but only when operating in its “degraded” mode. It also didn't matter whether glibc was configured to use an IPv4 or IPv6 nameserver, or which nameserver was being used. Power cycling the modem would fix the issue for a few days.

My ISP offered to downgrade the firmware on the Arris SB6190 to version 9.1.93K. This firmware version doesn't go into a degraded mode after a few days, but it does exhibit higher latency, and more jitter:

It seemed unlikely that Arris would fix the firmware issues in the SB6190 before the end of my 30-day return window, so I returned the SB6190 and purchased a Netgear CM600. This modem appears to be based on the Broadcom BCM3384 and looking at an older release of the CM600 firmware source reveals that the device runs the open source eCos embedded operating system.

The Netgear CM600 so far hasn't exhibited any of the issues I found with the Arris SB6190 modem. Here is a SmokePing graph for the CM600, which shows median latency about 1 ms lower than the Arris modem:

It's not clear which company is to blame for the problems in the Arris modem. Looking at the DOCSIS drivers in the SB6190 firmware source reveals copyright statements from ARRIS Group, Texas Instruments, and Intel. However, I would recommend avoiding cable modems produced by Arris in particular, and cable modems based on the Intel Puma SoC in general.

Norbert Preining: Debian/TeX Live 2016.20160805-1

7 August, 2016 - 04:31

TUG 2016 is over, and I have returned from a wonderful trip to Toronto and Maine. High time to release a new checkout of the TeX Live packages. After this one I will probably need some time before the next checkout, as there are a lot of plans on the table: upstream created a new collection, which means a new package in Debian, which needs to go through NEW, and I am also planning to integrate tex4ht to give it an update. Help greatly appreciated here.

This package also sees the (third) revision of how the config files for pdftex and luatex are structured; I hope we have now settled down. Hopefully this will close some of the issues that have appeared.

New packages

biblatex-ijsra, biblatex-nottsclassic, binarytree, diffcoeff, ecgdraw, fvextra, gitfile-info, graphics-def, ijsra, mgltex, milog, navydocs, nodetree, oldstandardt1, pdflatexpicscale, randomlist, texosquery

Updated packages

2up, acmart, acro, amsmath, animate, apa6, arabluatex, archaeologie, autobreak, beebe, biblatex-abnt, biblatex-gost, biblatex-ieee, biblatex-mla, biblatex-source-division, biblatex-trad, binarytree, bxjscls, changes, cloze, covington, cs, csplain, csquotes, csvsimple, datatool, datetime2, disser, dvipdfmx, dvips, emisa, epstopdf, esami, etex-pkg, factura, fancytabs, forest, genealogytree, ghsystem, glyphlist, gost, graphics, hyperref, hyperxmp, imakeidx, jadetex, japanese-otf, kpathsea, latex, lstbayes, luatexja, mandi, mcf2graph, mfirstuc, minted, oldstandard, optidef, parnotes, philosophersimprint, platex, protex, pst-pdf, ptex, pythontex, readarray, reledmac, sepfootnotes, sf298, skmath, skrapport, stackengine, sttools, tcolorbox, tetex, texinfo, texlive-docindex, texlive-es, texlive-scripts, thesis-ekf, tools, toptesi, tudscr, turabian-formatting, updmap-map, uplatex, uptex, velthuis, xassoccnt, ycbook.

Enjoy.

Dirk Eddelbuettel: RcppStreams 0.1.1

6 August, 2016 - 09:54

A maintenance release of RcppStreams is now on CRAN. RcppStreams brings the excellent Streamulus C++ template library for event stream processing to R.

Streamulus, written by Irit Katriel, uses very clever template meta-programming (via Boost Fusion) to implement an embedded domain-specific event language created specifically for event stream processing.

This release updates the compilation standard to C++11 per CRAN's request as this helps with both current and upcoming compiler variants. A few edits were made to DESCRIPTION and README.md, the Travis driver file was updated, but no new code was added.

The NEWS file entries follows below:

Changes in version 0.1.1 (2016-08-05)
  • Compilation is now done using C++11 standards per request of CRAN to help with an array of newer (pre-release) and older compilers

  • The Travis CI script was updated to use run.sh from our fork; it now also installs all dependencies as binary .deb files.

  • The README.md was updated with additional badges.

  • The DESCRIPTION file now has URL and BugReports entries.

Courtesy of CRANberries, there is also a copy of the DESCRIPTION file for this release. More detailed information is on the RcppStreams page and of course on the Streamulus page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Petter Reinholdtsen: Sales number for the Free Culture translation, first half of 2016

6 August, 2016 - 03:45

As my regular readers probably remember, last year I published French and Norwegian translations of the classic Free Culture book by the founder of the Creative Commons movement, Lawrence Lessig. A bit less known is the fact that, due to the way I created the translations using docbook and po4a, I also recreated the English original, and because I had already created a new PDF edition, I published it too. The revenue from the books is sent to the Creative Commons Corporation. In other words, I do not earn any money from this project; I just earn the warm fuzzy feeling that the text is available for a wider audience and more people can learn why the Creative Commons is needed.

Today, just for fun, I had a look at the sales numbers over at Lulu.com, which takes care of payment, printing and shipping. Much to my surprise, the English edition is selling better than both the French and Norwegian editions, despite the fact that it has been available in English since the book was first published. In total, 24 paper books were sold for USD $19.99 each between 2016-01-01 and 2016-07-31:

Title / language          Quantity
Culture Libre / French    3
Fri kultur / Norwegian    7
Free Culture / English    14

The books are available both from Lulu.com and from large book stores like Amazon and Barnes & Noble. Most of the revenue, around $10 per book, is sent to the Creative Commons project when the book is sold directly by Lulu.com; the other channels give less revenue. The summary from Lulu tells me 10 books were sold via the Amazon channel, 10 via Ingram (what is this?) and 4 directly by Lulu, and that the revenue sent so far this year is USD $101.42. I have no idea what kind of sales numbers to expect, so I do not know if that is a good amount of sales for a 10-year-old book or not, but it makes me happy that the buyers find the book, and I hope they enjoy reading it as much as I did.

The ebook edition is available for free from Github.

If you would like to translate and publish the book in your native language, I would be happy to help make it happen. Please get in touch.

Arturo Borrero González: Spawning a new blog with jekyllrb

5 August, 2016 - 12:00

I have been delighted with git for several years now. It's a very powerful tool and I use it every day.
I try to use git for all possible tasks: bind servers, configurations, firewalls, and other personal stuff.

However, there has been always a thing in my git-TODO: a blog managed with git.

After a bit of searching, I found an interesting technology: jekyllrb hosted at github pages. Jekyll looked easy to manage and easy to learn for a newbie like me.
There are some very good looking blogs out there using this combination, for example: https://rsms.me/

But I was too lazy to migrate this 'ral-arturo' blog from blogger to jekyll, so I decided to create a new blog from scratch.

This time, the new blog is written in Spanish and is about adventures, nature, travels and outdoor sports.
Perhaps you noticed this article about the Mulhacen mountain (BTW, we did it! :-))
The new blog is called alfabravo.org.

I like the workflow with git & jekyll & github:

  • clone the repository
  • write a new post in markdown
  • read it and correct it locally with 'jekyll serve'
  • commit and push to github
  • done!
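In command form, that workflow looks roughly like this (a sketch; the repository URL and post name are placeholders):

git clone git@github.com:USER/USER.github.io.git
cd USER.github.io
vim _posts/2016-08-05-new-post.md    # write the post in markdown
jekyll serve                         # preview locally at http://127.0.0.1:4000
git add _posts/2016-08-05-new-post.md
git commit -m 'Add new post'
git push origin master               # github pages rebuilds the site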

Who knows, perhaps this 'ral-arturo' blog ends being migrated to the new system as well.

Steve Kemp: Using the compiler to help you debug segfaults

5 August, 2016 - 09:15

Recently somebody reported that my console-based mail-client was segfaulting when opening an IMAP folder, and then when they tried with a local Maildir-hierarchy the same fault was observed.

I couldn't reproduce the problem at all, as neither my development host (read "my personal desktop"), nor my mail-host had been crashing at all, both being in use to read my email for several months.

Debugging crashes with no backtrace, or real hint of where to start, is a challenge. Even when downloading the same Maildir samples I couldn't see a problem. It was only when I decided to see if I could add some more diagnostics to my code that I came across a solution.

My intention was to make it easier to receive a backtrace, by adding more compiler options:

  -fsanitize=address -fno-omit-frame-pointer
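For a typical Makefile-driven C++ project that amounts to something like the following; a hypothetical invocation, since the variable names and the binary name depend on the build system:

  make clean
  make CXXFLAGS="-g -O1 -fsanitize=address -fno-omit-frame-pointer" LDFLAGS="-fsanitize=address"
  ./your-binary    # AddressSanitizer aborts with a full backtrace on the first invalid memory access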

I added those options and my mail-client immediately started to segfault on my own machine(s), almost as soon as it started. Ultimately I found three pieces of code where I was allocating C++ objects and passing them to the Lua stack, a pretty fundamental part of the code, which were buggy. Once I'd tracked down the areas of code that were broken and fixed them the user was happy, and I was happy too.

It's interesting that I've been running for over a year with these bogus things in place, which "just happened" not to crash for me or anybody else. In the future I'll be adding these options to more of my C-based projects, as there seems to be virtually no downside.

In related news my console editor has now achieved almost everything I want it to, having gained:

  • Syntax highlighting via Lua + LPEG
  • Support for TAB completion of Lua-code and filenames.
  • Bookmark support.
  • Support for setting the mark and copying/cutting regions.

The only outstanding feature, which is a biggy, is support for Undo which I need to add.

Happily no segfaults here, so far..

Joey Hess: keysafe

5 August, 2016 - 07:24

Have you ever thought about using a gpg key to encrypt something, but didn't due to worries that you'd eventually lose the secret key? Or maybe you did use a gpg key to encrypt something and lost the key. There are nice tools like paperkey to back up gpg keys, but they require things like printers, and a secure place to store the backups.

I feel that simple backup and restore of gpg keys (and encryption keys generally) is keeping some users from using gpg. If there was a nice automated solution for that, distributions could come preconfigured to generate encryption keys and use them for backups etc. I know this is a missing piece in the git-annex assistant, which makes it easy to generate a gpg key to encrypt your data, but can't help you back up the secret key.

So, I'm thinking about storing secret keys in the cloud. Which seems scary to me, since when I was a Debian Developer, my gpg key could have been used to compromise millions of systems. But this is not about developers, it's about users, and so trading off some security for some ease of use may be appropriate. Especially since the alternative is no security. I know that some folks back up their gpg keys in the cloud using Dropbox... We can do better.

I've thought up a design for this, called keysafe. The synopsis of how it works is:

The secret key is split into three shards, and each is uploaded to a server run by a different entity. Any two of the shards are sufficient to recover the original key. So any one server can go down and you can still recover the key.
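keysafe does the sharding itself, but the underlying idea is ordinary threshold secret sharing. Purely as an illustration of the 2-of-3 property (this is not keysafe's implementation), the ssss tool packaged in Debian can split a secret so that any two of three shares recover it:

sudo apt-get install ssss
echo "my-demo-secret" | ssss-split -t 2 -n 3   # prints three share lines
ssss-combine -t 2                              # paste any two shares to recover the secret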

A password is used to encrypt the key. For the servers to access your key, two of them need to collude together, and they then have to brute force the password. The design of keysafe makes brute forcing extra difficult by making it hard to know which shards belong to you.

Indeed the more people that use keysafe, the harder it becomes to brute-force anyone's key!

I could really use some additional reviews and feedback on the design by experts.

This project is being sponsored by Purism and by my Patreon supporters. By the way, I'm 15% of the way to my Patreon goal after one day!

Phil Hands: EOMA68: > $60k pledged on crowdsupply.com

5 August, 2016 - 05:04

crowdsupply.com has a campaign to fund production of EOMA68 computer cards (and associated peripherals) which recently passed the $60,000 mark.

If you were at DebConf13 in Switzerland, you may have seen me with some early prototypes that I had been lent to show people.

The inside of the A20 EOMA68 computer board

The concept: build computers on a PCMCIA physical form-factor, thus confining most of the hardware and software complexity in a single replaceable item, decoupling the design of the outer device from the chips that drive it.

EOMA68 pack-shot

There is a lot more information about this at crowdsupply, and at http://rhombus-tech.net/ -- I hope people find it interesting enough to sign up.

BTW While I host Rhombus Tech's website as a favour to Luke Leighton, I have no financial links with them.

Daniel Kahn Gillmor: Changes for GnuPG in Debian

4 August, 2016 - 04:55

The GNU Privacy Guard (GnuPG) upstream team maintains three branches of development: 1.4 ("classic"), 2.0 ("stable"), and 2.1 ("modern").

They differ in various ways: software architecture, supported algorithms, network transport mechanisms, protocol versions, development activity, co-installability, etc.

Debian currently ships two versions of GnuPG in every maintained suite -- in particular, /usr/bin/gpg has historically always been provided by the "classic" branch.

That's going to change!

Debian unstable will soon be moving to the "modern" branch for providing /usr/bin/gpg. This will give several advantages for Debian and its users in the future, but it will require a transition. Hopefully we can make it a smooth one.

What are the benefits?

Compared to "classic", the "modern" branch has:

  • updated crypto (including elliptic curves)
  • componentized architecture (e.g. libraries, some daemonized processes)
  • improved key storage
  • better network access (including talking to keyservers over tor)
  • stronger defaults
  • more active upstream development
  • safer info representation (e.g. no more key IDs, fingerprints easier to copy-and-paste)

If you want to try this out, the changes are already made in experimental. Please experiment!

What does this mean for end users?

If you're an end user and you don't use GnuPG directly, you shouldn't notice much of a change once the packages start to move through the rest of the archive.

Even if you do use GnuPG regularly, you shouldn't notice too much of a difference. One of the main differences is that all access to your secret key will be handled through gpg-agent, which should be automatically launched as needed. This means that operations like signing and decryption will cause gpg-agent to prompt the user to unlock any locked keys directly, rather than gpg itself prompting the user.

If you have an existing keyring, you may also notice a difference based on a change of how your public keys are managed, though again this transition should ideally be smooth enough that you won't notice unless you care to investigate more deeply.

If you use GnuPG regularly, you might want to read the NEWS file that ships with GnuPG and related packages for updates that should help you through the transition.

If you use GnuPG in a language other than English, please install the gnupg-l10n package, which contains the localization/translation files. For versions where those files are split out of the main package, gnupg explicitly Recommends: gnupg-l10n already, so it should be brought in for new installations by default.

If you have an archive of old data that depends on known-broken algorithms, PGP3 keys, or other deprecated material, you'll need to have "classic" GnuPG around to access it. That will be provided in the gnupg1 package.

What does this mean for package maintainers?

If you maintain a package that depends on gnupg: be aware that the gnupg package in debian is going through this transition.

A few general thoughts:

  • If your package Depends: gnupg for signature verification only, you might prefer to have it Depends: gpgv instead. gpgv is a much simpler tool than the full-blown GnuPG suite, and should be easier to manage (see the sketch after this list). I'm happy to help with such a transition (we've made it recently with apt already)

  • If your package Depends: gnupg and expects ~/.gnupg/ to be laid out in a certain way, that's almost certainly going to break at some point. ~/.gnupg/ is GnuPG's internal storage, and it's not recommended to rely on any specific data structures there, as they may change. gpg offers commands like --export, --import, and --delete for manipulating its persistent storage. please use them instead!

  • If your package depends on parsing or displaying gpg's output for the user, please make sure you use its special machine-readable form (--with-colons). Parsing the human-readable text is not advised and may change from version to version.
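For instance, here is roughly what the gpgv and --with-colons points above look like in practice; the keyring path and file names are placeholders:

  # verify a detached signature with the lightweight gpgv instead of the full gnupg
  gpgv --keyring ./trusted.gpg foo.tar.gz.sig foo.tar.gz

  # machine-readable listing: stable colon-separated fields instead of human-oriented text
  gpg --with-colons --with-fingerprint --list-keys | awk -F: '$1 == "fpr" { print $10 }'   # print fingerprints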

If you maintain a package that depends on gnupg2 and tries to use gpg2 instead of gpg, that should stay ok. However, at some point it'd be nice to get rid of /usr/bin/gpg2 and just have one expected binary (gpg). So you can help with that:

  • Look for places where your package expects gpg2 and make it try gpg instead, falling back to gpg2 cleanly if that fails.

  • Change your dependencies to indicate gnupg (>= 2)

  • Patch lintian to encourage other people to make this switch ;)

What specifically needs to happen?

The last major step for this transition was renaming the source package for "classic" GnuPG to be gnupg1. This transition is currently in the ftp-master's NEW queue. Once it makes it through that queue, and both gnupg1 and gnupg2 have been in experimental for a few days without reports of dangerous breakage, we'll upload both gnupg1 and gnupg2 to unstable.

We'll also need to do some triage on the BTS, reassigning some reports which are really only relevant for the "classic" branch.

Please report bugs via the BTS as usual! You're also welcome to ask questions and make suggestions on #debian-gnupg on irc.oftc.net, or to mail the Debian GnuPG packaging team at pkg-gnupg-maint@lists.alioth.debian.org.

Happy hacking!

Bdale Garbee: ChaosKey

4 August, 2016 - 04:17

I'm pleased to announce that, at long last, the ChaosKey hardware random number generator described in talks at Debconf 14 in Portland and Debconf 16 in Cape Town is now available for purchase from Garbee and Garbee.

Keith Packard: chaoskey

4 August, 2016 - 04:16
ChaosKey v1.0 Released — USB Attached True Random Number Generator

ChaosKey, our random number generator that attaches via USB, is now available for sale from the altusmetrum store.

We talked about this device at Debconf 16 last month.

Support for this device is included in Linux starting with version 4.1. Plug ChaosKey into your system and the driver will automatically add entropy into the kernel pool, providing a constant supply of true random numbers to help keep the system secure.
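A quick way to check that the kernel has picked the device up and that the entropy pool is being fed; a sketch, as the exact log lines vary by kernel version:

dmesg | grep -i chaoskey                    # driver messages after plugging the key in
cat /proc/sys/kernel/random/entropy_avail   # current entropy estimate of the kernel pool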

ChaosKey is free hardware running free software, built with free software on a free operating system.

Joey Hess: Patreon

4 August, 2016 - 02:08

I've been funded for two years by the DataLad project to work on git-annex. This has been a super excellent gig; they provided funding and feedback on ways git-annex could be improved, and I had a large amount of flexibility to decide what to work on in git-annex. Also plenty of spare time to work on new projects like propellor, concurrent-output, and scroll. It was an awesome way to spend the last two years of my twenty years of free software.

That funding is running out. I'd like to continue this great streak of working on the free software projects that are important to me. I'd normally dip into my savings at this point and keep on going until some other source of funding turned up. But, my savings are about to be obliterated, since I'm buying the place where I've had so much success working distraction-free.

So, I've started a Patreon page to fund my ongoing work. Please check it out and contribute if you want to.

Some details about projects I want to work on this fall:

Elena 'valhalla' Grandi: The Cat Model of Package Ownership

4 August, 2016 - 01:02
The Cat Model of Package Ownership

Debian has been moving away from strong ownership of packages by package maintainers and towards encouraging group maintainership, for very good reasons: single maintainers have a bad bus factor and a number of other disadvantages.

When single maintainership is changed into maintainership by a small¹, open group of people who can easily communicate and sync with each other, everything is just better: there is an easy way to gradually replace people who want to leave, but there is also no duplication of efforts (because communication is easy), there are means to always have somebody available for emergency work and generally package quality can only gain from it.

Unfortunately, having such group of maintainers for every package would require more people than are available and willing to work on it, and while I think it's worth doing efforts to have big and important packages managed that way, it may not be so for the myriad of small ones that make up the long tail of a distribution.

Many of those packages may end up being maintained in a big team such as the language-based ones, which is probably better than remaining with a single maintainer, but can lead to some problems.

My experience with the old OpenEmbedded, back when it was still using monotone instead of git² and everybody was maintaining everything, however, leads me to think that this model has a big danger of turning into nobody maintains anything, because when something needs to be done everybody is thinking that somebody else will do it.

As a way to prevent that, I have been thinking in the general direction of a Cat Model of Package Ownership, which may or may not be a way to prevent some risks of both personal maintainership and big teams.

The basic idea is that the “my” in “my packages” is not the “my” in “my toys”, but the “my” in “my Cat, to whom I am a servant”.

As in the case of a cat, if my package needs a visit to the vet, it's my duty to take it there. Other people may point me to the need for such a visit, e.g. by telling me that they have seen the cat leaving unhealthy stools, that there is a bug in the package, or even that upstream released a new version a week ago, did you notice?, but actually putting the package in a cat carrier and bringing it to the vet falls on me.

Whether you're allowed to play with or pet the cat is her decision, not mine, and giving her food or making changes to the package is usually fine, but please ask first: a few cats have medical issues that require a special diet.

And like cats, sometimes the cat may decide that I'm not doing a good enough job of serving her, and move away to another maintainer; just remember that there is a difference between a lost cat who wants to go back to her old home and a cat that is looking for a new one. When in doubt, packages usually wear a collar with contact informations, trying to ping those is probably a good idea.

This is mostly a summer afternoon idea and will probably require some refinement, but I think that the basic idea can have some value. Comments are appreciated on the federated social networks where this post is being published, via email (valid addresses are on my website http://www.trueelena.org/computers/articles/the_cat_model_of_package_ownership.html and on my GPG key http://www.trueelena.org/about/gpg.html) or with a post on a blog that appears on planet debian http://planet.debian.org/.

¹ how small is small depends a lot on the size of the package, the amount of work it requires, how easy it is to parallelize it and how good are the people involved at communicating, so it would be quite hard to put a precise number here.

² I've moved away from it because the boards I was using could run plain Debian, but I've heard that after the move to git there have been a number of workflow changes (of which I've seen the start) and everything now works much better.

Guido Günther: Debian Fun in July 2016

3 August, 2016 - 14:02
Debian LTS

July marked the fifteenth month I contributed to Debian LTS under the Freexian umbrella. As usual I spent the 8 hours working on these LTS things:

  • Updated QEMU and QEMU-KVM packages to fix CVE-2016-5403, CVE-2016-4439, CVE-2016-4020, CVE-2016-2857 and CVE-2015-5239 resulting in DLA-573-1 and DLA-574-1
  • Updated icedove to 45.2.0 fixing CVE-2016-2818 resulting in DLA-574-1
  • Reviewed and uploaded xen 4.1.6.lts1-1. The update itself was prepared by Bastian Blank.
  • The little bit of remaining time I spent on further work on the ruby-active{model,record}-3.2 and ruby-actionpack-3.2 (aka rails) CVEs. Although I have fixes for most of the CVEs already, there are still some left where I'm not yet clear whether the packages are affected.
  • Added some trivial autopkgtest for qemu-img (#832982) (on non LTS time)
Other Debian stuff
  • Fixed CVE-2016-5008 by uploading libvirt 2.0.0 to sid and 1.2.9-9+deb8u3 to stable-p-u
  • Uploaded libvirt 2.1.0~rc1 to experimental
  • Uploaded libvirt-python 2.0.0 to sid
  • Uploaded libosinfo 0.3.1 to sid preparing for the upcoming upstream package split
  • Uploaded virt-manager 1.4.0 to sid
  • Uploaded network-manager-iodine 1.2.0 to sid
  • Uploaded cups-pk-helper 0.2.6 to sid
  • Triaged apparmor related bugs in libvirt most notably the one affecting hotplugging of disks (#805002) which turned out to be rooted in the kernel not reloading profiles properly.
  • Uploaded git-buildpackage 0.8.0, 0.8.1 to experimental adding additional tarball support to gbp import-orig among other things

Dirk Eddelbuettel: digest 0.6.10

3 August, 2016 - 10:13

A new release of the digest package, now at version 0.6.10, is on CRAN. I also just prepared the Debian upload.

This release, just like the previous one, is once again the work mostly of external contributors. Michel Lang added length checks to sha1(); Thierry Onkelinx extended sha1() support and added more tests, Viliam Simko also extended sha1() to more types, and Henrik Bengtsson improved intervals and fixed a bug with file usage.

CRANberries provides the usual summary of changes to the previous version.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

John Goerzen: All Aboard

3 August, 2016 - 08:13

“Aaaaaall Aboard!” *chug* *chug*

And so began a “trip” aboard our hotel train in Indianapolis, conducted by our very own Jacob and Oliver.

Because, well, what could be more fun than spending a few days in the world’s only real Pullman sleeping car, on its original service track, inside a hotel?

We were on a family vacation to Indianapolis, staying in what two railfan boys were sure to enjoy: a hotel actually built into part of the historic Indianapolis Union Station complex. This is the original train track and trainshed. They moved in the Pullman cars, then built the hotel around them. Jacob and Oliver played for hours, acting as conductors and engineers, sending their “train” all across the country to pick up and drop off passengers.

Opa!

Have you ever seen a kid’s face when you introduce them to something totally new, and they think it is really exciting, but a little scary too?

That was Jacob and Oliver when I introduced them to saganaki (flaming cheese) at a Greek restaurant. The conversation went a little like this:

“Our waitress will bring out some cheese. And she will set it ON FIRE — right by our table!”

“Will it burn the ceiling?”

“No, she’ll be careful.”

“Will it be a HUGE fire?”

“About a medium-sized fire.”

“Then what will happen?”

“She’ll yell ‘OPA!’ and we’ll eat the cheese after the fire goes out.”

“Does it taste good?”

“Oh yes. My favorite!”

It turned out several tables had ordered saganaki that evening, so whenever I saw it coming out, I’d direct their attention to it. Jacob decided that everyone should call it “opa” instead of saganaki because that’s what the waitstaff always said. Pretty soon whenever they’d see something appear in the window from the kitchen, there’d be craning necks and excited jabbering of “maybe that’s our opa!”

And when it finally WAS our “opa”, there were laughs of delight and I suspect they thought that was the best cheese ever.

Giggling Elevators

Fun times were had pressing noses against the glass around the elevator. Laura and I sat on a nearby sofa while Jacob and Oliver sat by the elevators, anxiously waiting for someone to need to go up and down. They'd point and wave at elevators coming down, and when elevator passengers waved back, Oliver would burst out giggling and run over to Laura and me with excitement.

Some history

We got to see the grand hall of Indianapolis Union Station — what a treat to be able to set foot in this magnificent, historic space, the world's oldest union station. We even got to see the office where Thomas Edison worked; as a hotel employee explained, he was fired for doing too many experiments on the job.

Water and walkways

Indy has a system of elevated walkways spanning quite a section of downtown. It can be rather complex navigating them, and after our first day there, I offered to let Jacob and Oliver be the leaders. Boy did they take pride in that! They stopped to carefully study maps and signs, and proudly announced “this way” or “turn here” – and were usually correct.

And it was the same in the paddleboat we took down the canal. Both boys wanted to be in charge of steering, and we only scared a few other paddleboaters.

Fireworks

Our visit ended with the grand fireworks show downtown, set off from atop a skyscraper. I had been scouting for places to watch from, and figured that a bridge-walkway would be great. A couple other families had that thought too, and we all watched the 20-minute show in the drizzle.

Loving brothers

By far my favorite photo from the week is this one, of Jacob and Oliver asleep, snuggled up next to each other under the covers. They sure are loving and caring brothers, and had a great time playing together.

Olivier Grégoire: Tenth : SmartInfo is alive!

2 August, 2016 - 23:57

Hi,
This week, I worked on the GNOME client. I wanted to link my right-click menu with the call view. That was difficult because the right-click menu is called by a signal, so I didn't have access to the instance of the menu to send my own signal. I tried a lot of things to implement my signal, but it was just impossible. Then I discovered GAction. With this technique, I just needed to change the state of my action and connect it with my method in my view, and it was done!

I now had a complete working solution, which I tested to find improvements and bugs:
- Replaced the two buttons that launch and stop the system with one single button.
- Fixed the bug that crashed Ring when I tried to use SmartInfo in a new call.
- Fixed some other small bugs.

See you next week!


Dirk Eddelbuettel: RcppGetconf 0.0.2

2 August, 2016 - 08:28

A first update to the recent RcppGetconf package for reading system configuration --- not unlike getconf from the libc library --- is now out. Almost immediately after I tweeted / blogged asking for help with OS X builds, fellow Rcpp hacker Qiang Kou created a clever pull request allowing exactly that. So now we cover the two POSIX systems that matter most these days --- Linux and OS X --- but as there are more out there, please do try, test and send those pull requests.

We also added a new function getConfig() retrieving a single (given) value complementing the earlier catch-all function getAll(). You can find out more about RcppGetconf from the local RcppGetconf page and the GitHub repo.

Changes in this release are as follows:

Changes in RcppGetconf version 0.0.2 (2016-08-01)
  • A new function getConfig for single values was added.

  • The struct vars is now defined more portable allowing compilation on OS X (PR #1 by Qiang Kou).

Courtesy of CRANberries, there is a diffstat report. More about the package is at the local RcppGetconf page and the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.