Planet Debian

Petter Reinholdtsen: Coz can help you find bottlenecks in multi-threaded software - nice free software

11 August, 2016 - 17:00

This summer, I read a great article, "coz: This Is the Profiler You're Looking For", in USENIX ;login: about how to profile multi-threaded programs. It presented a system for profiling software by running experiments in the running program, testing how run-time performance is affected by "speeding up" parts of the code to various degrees compared to a normal run. It does this by slowing down parallel threads while the "sped up" code is running and measuring how this affects processing time. The processing time is measured using probes inserted into the code, either as progress counters (COZ_PROGRESS) or as latency meters (COZ_BEGIN/COZ_END). It can also measure unmodified code by measuring the complete program runtime and running the program several times instead.

The project and presentation were so inspiring that I would like to get the system into Debian. I created a WNPP request for it and contacted upstream to try to make the system ready for Debian by sending patches. The build process needs to be changed a bit to avoid running 'git clone' to fetch dependencies, and to include in the source package the JavaScript web page used to visualize the collected profiling information. But I expect that should work out fairly soon.

The way the system works is fairly simple. To run a coz experiment on a binary with debug symbols available, start the program like this:

coz run --- program-to-run

This will create a text file, profile.coz, with the instrumentation information. To see which parts of the code affect performance the most, use a web browser and either point it to http://plasma-umass.github.io/coz/ or use the copy from git (in the gh-pages branch). Check out that web site to look at several example profiling runs and get an idea of what the end result looks like. To make the profiling more useful, include <coz.h> and insert COZ_PROGRESS or COZ_BEGIN/COZ_END probes at appropriate places in the code, then rebuild and run the profiler. This allows coz to run more targeted experiments.
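
To give an idea of what that instrumentation looks like, here is a minimal sketch (the function and data are invented for this example; only the probe macros come from coz.h): COZ_PROGRESS counts units of completed work for throughput experiments, while COZ_BEGIN/COZ_END measure the latency of a region.

#include <coz.h>
#include <vector>

// Hypothetical per-item work; stands in for whatever your program does.
static void handle(int item) { /* ... expensive work ... */ }

void process_items(const std::vector<int> &items) {
    for (int item : items) {
        COZ_BEGIN("handle-item");   // start of the latency-measured region
        handle(item);
        COZ_END("handle-item");     // end of the latency-measured region
        COZ_PROGRESS;               // one unit of progress for throughput profiling
    }
}

Rebuild with debug symbols and run the result under "coz run --- program-to-run" as shown above.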

A video presenting the Coz profiler, published by ACM, is available on YouTube. There is also a paper from the 25th Symposium on Operating Systems Principles titled "Coz: finding code that counts with causal profiling".

The source code for Coz is available from github. It will only build with clang because it uses a C++ feature missing in GCC, but I've submitted a patch to solve it and hope it will be included in the upstream source soon.

Please get in touch if you, like me, would like to see this piece of software in Debian. I would very much like some help with the packaging effort, as I lack in-depth knowledge of how to package C++ libraries.

Tom Marble: webica

11 August, 2016 - 02:54
webica

I've just pushed the first version of my new Clojure wrapper for Selenium called webica.

The reason I need webica is that I want to do automated browser testing for ClojureScript-based web applications. Certainly NodeJS, PhantomJS, Nashorn and the like are useful... but these can't quite emulate the full browser experience. We want to test our ClojureScript web apps in browsers -- ideally via our favorite automated continuous integration tools.




My new approach with the webica library is to do full Java introspection in the spirit of what amazonica does for the AWS API. In fact I wanted to take it a step further by actually generating Clojure source code via introspection that can be used by Codox to generate nice API docs (which you don't get with amazonica). That, alas, was a little trickier than expected due to pesky Quine-like problems.

If you load the library on the REPL you can get a feeling for each namespace by calling the show-functions function.

I realize that this approach of aggressive introspection, playing fast and loose with types, and application-level dynamic dispatch amounts to crazy antipatterns. In my defense, I started out playing around to see "if I could do it". After seeing the result in the form of a shell script in Clojure -- imitating lmgtfy -- perhaps webica will actually be useful!

I plan to talk about webica tonight at clojure.mn -- hope to see you there!

Julian Andres Klode: Porting APT to CMake

10 August, 2016 - 22:52

Ever since its creation back in the dark ages, APT shipped with its own build system consisting of autoconf and a bunch of makefiles. In 2009, I felt like replacing that with something more standard, and because nobody really liked autotools, decided to go with CMake. Well, that bazaar branch was never really merged back in 2009.

Fast forward 7 years to 2016. A few months ago, we noticed that our build system had trouble with correct dependencies in parallel building. So, in search of a way out, I picked up my CMake branch from 2009 last Thursday and spent the whole weekend working on it, and today I am happy to announce that I merged it into master:

123 files changed, 1674 insertions(+), 3205 deletions(-)

More than 1500 lines less build system code. Quite impressive, eh? This also includes about 200 fewer lines in debian/, as that switched from prehistoric debhelper stuff to modern dh (compat level 9, almost ready for 10).

The annoying Tale of Targets vs Files

Talking about CMake: I don’t really love it. As you might know, CMake differentiates between targets and files. Targets can in some cases depend on files (generated by a command in the same directory), but overall files are not really targets. You also cannot have a target with the same name as a file you are generating in a custom command; you have to rename your target (make is OK with the generated stuff, but ninja complains about cycles because your custom target and your custom command have the same name).

Byproducts for the (time) win

One interesting thing about CMake and Ninja are byproducts. In our tree, we are building C++ files. We also have .pot templates depending on them, and .mo files depending on the templates (we have multiple domains, and merge the per-domain .pot with the all-domain .po file during the build to get a per-domain .mo). Now, if we just let them depend naively, changing a C++ file causes the .pot file to be regenerated, which in turn causes us to build .mo files for every freaking language in the package. Even if nothing changed.

Byproducts solve this problem. Instead of just building the .pot file, we also create a stamp file (AKA the witness) and write the .pot file (without a header) into a temporary name and only copy it to its final name if the content changed. The .pot file is declared as a byproduct of the command.

The command doing the .pot->.mo step still depends on the .pot file (the byproduct), but as that now only changes if strings change, the .mo files only get rebuilt if I change a translatable string. We still need to ensure that the .pot file is actually built before we try to use it – the solution here is to specify a custom target depending on the witness and then have the target containing the .mo build commands depend on that target.
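
A rough sketch of that pattern in CMake syntax (all file, variable and target names here are invented for illustration, and the header-stripping detail is omitted):

# APT_PO_SOURCES is assumed to hold the list of translatable source files.
# The .pot file is a declared byproduct; the stamp file is the witness.
add_custom_command(
  OUTPUT apt.pot-stamp
  BYPRODUCTS apt.pot
  COMMAND xgettext --output=apt.pot.tmp ${APT_PO_SOURCES}
  COMMAND ${CMAKE_COMMAND} -E copy_if_different apt.pot.tmp apt.pot
  COMMAND ${CMAKE_COMMAND} -E touch apt.pot-stamp
  DEPENDS ${APT_PO_SOURCES})
add_custom_target(apt-pot DEPENDS apt.pot-stamp)

# The .mo rule depends on the .pot (the byproduct), so it only reruns
# when translatable strings actually changed.
add_custom_command(
  OUTPUT de.mo
  COMMAND msgfmt --output-file=de.mo de.po
  DEPENDS apt.pot de.po)
add_custom_target(apt-mo ALL DEPENDS de.mo)

# Ensure the .pot exists before the .mo commands run.
add_dependencies(apt-mo apt-pot)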

Now if you use make, you might know this trick already. In make, the byproducts remain undeclared, though, while in CMake we can now actually express them, and they are used by the Ninja generator and the Ninja build tool if you choose that over make (try it out, it’s fast).

Further Work

Some command names are hardcoded; I should find_program() them. Also, cross-building the package does not yet work successfully, but it only requires a small number of patches in debhelper and/or cmake.

I also tried building the package on a Fedora docker image (with dpkg installed; it’s available in the Fedora sources). While I could eventually get the programs built and most of the integration test suite to pass, there are some minor issues to fix, mostly in the documentation building and GTest department: Fedora ships its docbook stylesheets in a different location, and ships GTest as a pre-compiled library rather than a source tree.

I have not yet tested building on exotic platforms like macOS, or even a BSD. Please do and report back. In Debian, CMake is not up-to-date enough on the non-Linux platforms to build APT due to test suite failures; I hope those can be fixed/disabled soon (it appears to be a timing issue AFAICT).

I hope that we eventually get some non-Debian backends for APT. I’d love that.


Filed under: Debian, Uncategorized

Norbert Preining: Suki Kim – Without You, There Is No Us

10 August, 2016 - 14:11

A book that goes further behind the walls that surround North Korea than anything else I have seen. Suki Kim, a Korean-American, managed to squeeze herself into an English-teaching job at the Pyongyang University of Science and Technology, and reports her experiences from two visits there.

Most of us in the connected world are well aware of the incredible backwardness of North Korea, and of the harsh living conditions despite the praise that is bombarded onto us through the official channels. But reading about the incredibly underdeveloped students at PUST, the Pyongyang University of Science and Technology, the elite of the country, who have never heard of the most basic techniques, is still surprising.

Time there seemed to pass differently. When you are shut off from the world, every day is exactly the same as the one before. This sameness has a way of wearing down your soul until you become nothing but a breathing, toiling, consuming thing that awakes to the sun and sleeps at the dawning of the dark.

Another very disturbing part of this book is the short but intense glimpses of the countryside, when excursions or shopping trips were scheduled. They lay open a barren land, with Gulag-like working conditions and a permanent shortage of proper food.

I had been aware of the situation in North Korea, but reading about it from a very special perspective gave me the shivers.

John Goerzen: Easily Improving Linux Security with Two-Factor Authentication

10 August, 2016 - 05:23

2-Factor Authentication (2FA) is a simple way to help improve the security of your systems. It restricts the scope of damage if a machine is compromised. If, for instance, you have a security token or authenticator app on your phone that is required for ssh to a remote machine, then even if every laptop you use to connect to the remote is totally owned, an attacker cannot establish a new ssh session on their own.

There are a lot of tutorials out there on the Internet that get you about halfway there, so here is some more detail.

Background

In this article, I will be focusing on authentication in the style of Google Authenticator, which is a special case of OATH HOTP or TOTP. You can use the Google Authenticator app, FreeOTP, or a hardware token like Yubikey to generate tokens with this. They are all 100% compatible with Google Authenticator and libpam-google-authenticator.

The basic idea is that there is a pre-shared secret key. At each login, a different and unique token is required, which is generated based on the pre-shared secret key and some other information. With TOTP, the “other information” is the current time, implying that both machines must be reasonably well in sync time-wise. With HOTP, the “other information” is a count of the number of times the pre-shared key has been used. Both typically have a “window” on the server side that lets tokens within a certain number of seconds, or a certain number of login accesses, still work.

The beauty of this system is that after the initial setup, no Internet access is required on either end to validate the key (though TOTP requires both ends to be reasonably in sync time-wise).
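
If you want to see the token math in action offline, oathtool from the oath-toolkit package (a suggestion for experimenting only; nothing in the setup below needs it) can generate codes from a base32 secret. The secret here is a made-up example:

oathtool --totp --base32 JBSWY3DPEHPK3PXP        # TOTP code for the current time
oathtool --counter 42 --base32 JBSWY3DPEHPK3PXP  # HOTP code for counter value 42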

The basics: user account setup and ssh authentication

You can start with the basics by reading one of these articles: one, two, three. Debian/Ubuntu users will find both the pam module and the user account setup binary in libpam-google-authenticator.
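
In case you want the condensed version, the per-user setup boils down to something like this on Debian/Ubuntu (run google-authenticator as the account you are protecting and answer its prompts):

sudo apt-get install libpam-google-authenticator
google-authenticator   # writes ~/.google_authenticator with the pre-shared secret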

For many, you can stop there. You’re done. But if you want to kick it up a notch, read on:

Enhancement 1: Requiring 2FA even when ssh public key auth is used

Let’s consider a scenario in which your system is completely compromised. Unless your ssh keys are also stored in something like a Yubikey Neo, they could wind up being compromised as well – if someone can read your files and sniff your keyboard, your ssh private keys are at risk.

So we can configure ssh and PAM so that an OTP token is required even for this scenario.

First off, in /etc/ssh/sshd_config, we want to change or add these lines:

UsePAM yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive

This forces all authentication to pass two verification methods in ssh: publickey and keyboard-interactive. All users will have to supply a public key and then also pass keyboard-interactive auth. Normally keyboard-interactive auth prompts for a password, but we can change that in /etc/pam.d/sshd. I added this line at the very top of /etc/pam.d/sshd:

auth [success=done new_authtok_reqd=done ignore=ignore default=bad] pam_google_authenticator.so

This basically makes Google Authenticator both necessary and sufficient for keyboard-interactive in ssh. That is, whenever the system wants to use keyboard-interactive, rather than prompt for a password, it instead prompts for a token. Note that any user that has not set up google-authenticator already will be completely unable to ssh into their account.

Enhancement 1, variant 2: Allowing automated processes to root

On many of my systems, I have ~root/.ssh/authorized_keys set up to permit certain systems to run locked-down commands for things like backups. These are automated commands, and the above configuration will break them because I’m not going to be typing in codes at 3AM.

If you are very restrictive about what you put in root’s authorized_keys, you can exempt the root user from the 2FA requirement in ssh by adding this to sshd_config:

Match User root
  AuthenticationMethods publickey

This says that the only way to access the root account via ssh is to use the authorized_keys file, and no 2FA will be required in this scenario.

Enhancement 1, variant 3: Allowing non-pubkey auth

On some multiuser systems, some users may still want to use password auth rather than publickey auth. There are a few ways we can support that:

  1. Users without public keys will have to supply an OTP and a password, while users with public keys will have to supply a public key, an OTP, and a password
  2. Users without public keys will have to supply an OTP or a password, while users with public keys will have to supply a public key, an OTP, or a password
  3. Users without public keys will have to supply an OTP and a password, while users with public keys only need to supply the public key

The third option is covered in any number of third-party tutorials. To enable options 1 or 2, you’ll need to put this in sshd_config:

AuthenticationMethods publickey,keyboard-interactive keyboard-interactive

This means that to authenticate, you need to pass either publickey and then keyboard-interactive auth, or just keyboard-interactive auth.

Then in /etc/pam.d/sshd, you put this:

auth required pam_google_authenticator.so

As a sub-variant for option 1, you can add nullok here to permit auth from people that do not have a Google Authenticator configuration.

Or for option 2, change “required” to “sufficient”. You should not add nullok in combination with sufficient, because that could let people without a Google Authenticator config authenticate completely without a password at all.

Enhancement 2: Configuring su

A lot of other tutorials stop with ssh (and maybe gdm) but forget about the other ways we authenticate or change users on a system. su and sudo are the two most important ones. If your root password is compromised, you don’t want anybody to be able to su to that account without having to supply a token. So you can set up google-authenticator for root.

Then, edit /etc/pam.d/su and insert this line after the pam_rootok.so line:

auth       required     pam_google_authenticator.so nullok

The reason you put this after pam_rootok.so is because you want to be able to su from root to any account without having to input a token. We add nullok to the end of this, because you may want to su to accounts that don’t have tokens. Just make sure to configure tokens for the root account first.

Enhancement 3: Configuring sudo

This one is similar to su, but a little different. This lets you, say, secure the root password for sudo.

Normally, you might sudo from your user account to root (if so configured). You might have sudo configured to require you to enter in your own password (rather than root’s), or to just permit you to do whatever you want as root without a password.

Our first step, as always, is to configure PAM. What we do here depends on your desired behavior: do you want to require someone to supply both a password and a token, just a token, or either a token or a password? If you want to require just a token, put this at the top of /etc/pam.d/sudo:

auth [success=done new_authtok_reqd=done ignore=ignore default=bad] pam_google_authenticator.so

If you want to require a token and a password, change the bracketed string to “required”, and if you want a token or a password, change it to “sufficient”. As before, if you want to permit people without a configured token to proceed, add “nullok”, but do not use that with “sufficient” or the bracketed example here.

Now here comes the fun part. By default, if a user is required to supply a password to sudo, they are required to supply their own password. That does not help us here, because a user logged in to the system can read the ~/.google_authenticator file and easily then supply tokens for themselves. What you want to do is require them to supply root’s password. Here’s how I set that up in sudoers:

Defaults:jgoerzen rootpw
jgoerzen ALL=(ALL) ALL

So now, with the combination of this and the PAM configuration above, I can sudo to the root user without knowing its password — but only if I can supply root’s token. Pretty slick, eh?

Further reading

In addition to the basic tutorials referenced above, consider:

Reproducible builds folks: Finishing the final variations

10 August, 2016 - 03:17

Author: ceridwen

I've been working on getting the last of the variations working. With no responses on the mailing list from anyone outside Debian and with limited time remaining, Lunar and I have decided to deemphasize it.

  1. Build path is done.

  2. Host and domain name use domainname and hostname. This site is old, but it indicates that domainname was available on most OSes and hostname was available everywhere as of 2004. Prebuilder uses a Linux-specific utility (unshare --uts) to run this variation in a chroot, but I'm not doing this for reprotest: if you want this variation, use qemu.

  3. User/group will not be portable, because they'll rely on useradd/groupadd and su. useradd and groupadd work on many but not all OSes, notably not including FreeBSD or MacOS X. su was universal in 2004.

  4. Time is not done but will probably be portable to some systems, because it will rely on date -s. Unfortunately, I haven't been able to find any information on how common date -s is across Unix-like OSes, as the -s option is not part of the POSIX standard.

  5. At the moment, I have no idea how to implement changes for /bin/sh and the login shell that will even work across different distributions of Linux, much less different OSes. There are a couple of key problems, starting with the need to find two different shells to use, because there's no way to find out what shells are installed. This blog post explains why /etc/shells doesn't work well for finding what shells are available: not everything in /etc/shells is necessarily a shell (Ubuntu has /usr/bin/screen) and not all available shells are in /etc/shells. Also, there's no good way to find out what shell is the system default because /bin/sh can be an arbitrary binary, not a symlink, and no good way to identify what it is if it is a binary. I can hard-code shell choices, but which shells? bash is obvious for Linux, but what's the best second choice?

On other topics:

  1. reprotest fails to build properly on jessie: Lunar said, and I agree, that fixing this is not a priority. I need someone with more knowledge of Debian Python packaging to help me. If I'm going to support old versions, I also need some kind of CI server, because I don't have the time or ability to maintain old versions of Debian and Python myself.

  2. libc-bin: ldconfig segfaults when run using "setarch uname26": I don't have a good solution for this, but I don't want to hard-code and maintain an architecture-specific disable. Would changing the argument to setarch work around the segfault? Is there a way to test for the presence of the bug that won't cause a segfault or similar crash?

  3. Please put adt-virt-* binaries back onto PATH: reprotest is not affected by this change because I forked the autopkgtest code rather than depending on it, so that reprotest can be installed through PyPi. At the moment, reprotest doesn't make its versions of the programs in virt/ available on $PATH. This is primarily because of the problems with distributing command-line scripts with setuptools. The approach I'm currently using, including the virt/ programs as non-code data with include reprotest/virt/* in MANIFEST.in, doesn't install them to make them available for other programs. Using one of the other approaches potentially could, but it's difficult to make the imports work with those approaches. (I haven't found a way to do it.) I think the best solution to this problem is to split autopkgtest into the Debian-specific components and the general-purpose virtualization components, but I don't have the time to do this myself or to negotiate with Martin Pitt, if he'd even be receptive. I'm also unsure at this point if it wouldn't be better for reprotest to switch from autopkgtest to using something like Ansible to run the virtualization, because Ansible has already solved some of the portability problems and is not tied to Debian.

My goal is to finish the variations (finally), though as this has always proved more difficult than I expected in the past, I don't make any guarantees. Beyond that, I want to start working on finishing docstrings in the reprotest-specific (i.e., not inherited from autopkgtest) code, improving the documentation in general, and improving the tests.

David Moreno: 0x20

10 August, 2016 - 02:11

So I turned 0x20.
🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺🍺

Reproducible builds folks: Reproducible builds: week 67 in Stretch cycle

9 August, 2016 - 19:56

What happened in the Reproducible Builds effort between Sunday July 31 and Saturday August 6 2016:

Toolchain development and fixes
  • dpkg/1.18.10 by Guillem Jover.
    • Generate reproducible source tarballs by using the new GNU tar --clamp-mtime option (a usage sketch follows this list)
    • Enable fixdebugpath build flag feature by default, original patch by Mattia Rizzolo.
  • cython/0.24.1-1 by Yaroslav Halchenko.
  • Chris Lamb and Thomas Schmidt worked on some patches to make reproducible ISO images.
  • Johannes Schauer continued the discussion on #763822 regarding dak and buildinfo files.
  • Johannes Schauer continued the discussion on #774415 regarding srebuild and debrebuild.
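
As a usage sketch for that --clamp-mtime option (file names invented for the example): it only takes effect together with --mtime, so a reproducible tarball is typically created along these lines:

tar --sort=name --mtime="@${SOURCE_DATE_EPOCH}" --clamp-mtime \
    --owner=0 --group=0 --numeric-owner \
    -cJf foo_1.0.orig.tar.xz foo-1.0/
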
Packages fixed and bugs filed

The following 24 packages have become reproducible - in our current test setup - due to changes in their build-dependencies: alglib aspcud boomaga fcl flute haskell-hopenpgp indigo italc kst ktexteditor libgroove libjson-rpc-cpp libqes luminance-hdr openscenegraph palabos petri-foo pgagent sisl srm-ifce vera++ visp x42-plugins zbackup

The following packages have become reproducible after being fixed:

The following newly-uploaded packages appear to be reproducible now, for reasons we were not able to figure out. (Relevant changelogs did not mention reproducible builds.)

  • libitext-java/2.1.7-1 by Emmanuel Bourg.
  • lice/1:4.2.5i-2 by Kurt Roeckx.
  • pgbackrest/1.04-1 by Adrian Vondendriesch.
  • pxlib/0.6.7-1 by Uwe Steinmann.
  • runit/2.1.2-5 by Dmitry Bogatov.
  • ssvnc/1.0.29-3 by Magnus Holmgren.
  • syncthing/0.14.3+dfsg1-3 by Alexandre Viau.
  • tachyon/0.99~b6+dsx-5 by Jerome Benoit.
  • tor/0.2.8.6-2 by Peter Palfrader.

Some uploads have addressed some reproducibility issues, but not all of them:

Patches submitted that have not made their way to the archive yet:

Package reviews and QA

These are reviews of reproducibility issues of Debian packages.

276 package reviews have been added, 172 have been updated and 44 have been removed this week.

7 FTBFS bugs have been reported by Chris Lamb.

Reproducibility tools
  • diffoscope/56~bpo8+1 uploaded to jessie-backports by Mattia Rizzolo
  • strip-nondeterminism/0.022-1~bpo8+1 uploaded to jessie-backports by Mattia Rizzolo
Test infrastructure

For testing the impact of allowing variations of the build path (which up until now we required to be identical for reproducible rebuilds), Reiner Herrmann contributed a patch which enabled build path variations on testing/i386. This is possible now since dpkg 1.18.10 enables the fixdebugpath build flag feature by default, which should result in reproducible builds (for C code) even with varying paths. So far we haven't had many results due to disturbances in our build network in the last days, but it seems this would mean roughly 5-15% additional unreproducible packages compared to what we see now. We'll keep you updated on the numbers (and problems with compilers and common frameworks) as we find them.

lynxis continued work to test LEDE and OpenWrt on two different hosts, to include date variation in the tests.

Mattia and Holger worked on the (mass) deployment scripts, so that the only jenkins.debian.net git clone (kept single for space reasons) resides in ~jenkins-adm/ and no longer in Holger's home directory. Soon Mattia (and possibly others!) will be able to fully maintain this setup, while Holger is doing siesta.

Miscellaneous

Chris, dkg, h01ger and Ximin attended a Core Infrastructure Initiative summit meeting in New York City, to discuss and promote this Reproducible Builds project. The CII was set up in the wake of the Heartbleed SSL vulnerability to support software projects that are critical to the functioning of the internet.

This week's edition was written by Ximin Luo and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

Thorsten Alteholz: My Debian Activities in July 2016

9 August, 2016 - 16:16

FTP assistant

This month I marked 248 packages for accept and rejected 60. I also sent 13 emails to maintainers asking questions. Again, this was a rather quiet month without much trouble.

Debian LTS

This was the twenty-fifth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

As the number of participants increases, my overall workload this month was only 14.70h. Strangely enough, most of the time I chose packages where, in the end, the vulnerable code for the corresponding CVE was not present in the Wheezy version. So I could mark several CVEs for bind, libgd2 and mupdf as not-affected, without doing an upload.

Nevertheless I also did two uploads to fix another two CVEs:

  • [DLA 563-1] libgd2 security update
  • [DLA 569-1] xmlrpc-epi security update

As some new CVEs for PHP5 arrived, I didn’t do an upload this month. But don’t purge your testing environments, a new version is coming soon :-).

This month I also had another term of frontdesk work.

Other stuff

For the Alljoyn framework I took care of RC-bug #829148.

I also uploaded a new version of rplay to fix #805959.

In the Javascript world I could close #831006.

Shirish Agarwal: Doha and the Supreme Court of DFSG Free

9 August, 2016 - 16:16

Hi,

I am in two minds about what to write about Doha. My job has been vastly simplified by a friend when he shared https://www.youtube.com/watch?v=LdrAd-44LW0 with me. That video is more relevant and closer to the truth than whatever I can share. As can be seen, it is funny but also sad, the way Qataris are trying to figure out how things will be, and it seems to be heading towards a ‘real estate bubble’. They would have to let go of the Sharia if they are thinking of wealthy westerners coming to stay put. I am just sad to know that many of my countrymen are stuck there and, although I hope the best for them, I dread it may turn out the way it has turned out for many Indians, especially those from Kerala, in Saudi Arabia. I will probably touch on the Kerala situation in another blog post, as this one is exclusively about the legal aspects discussed at Debconf.

A bit of background here: one part of my family are lawyers, which means I have some notion of law as practiced in our land. As probably everybody knows, India was ruled by the British for around 150-odd years. One of the things they gave us while leaving was/is the IPC (Indian Penal Code), which is practiced with the common-law concept. That concept means that the precedent set by any judgement goes quite some way in framing rulings and the law of the land as time goes on, besides the lobbying and the politics which happen in any democracy.

Free software would not have been there without the GPL – the GNU General Public License. The license is as much a legal document as it is something that developers can work with without becoming deranged, as it is one of the simpler licenses to work with.

My own understanding of the legal, ethical and moral issues around me was framed by two or three different TV shows and by books (fiction and non-fiction alike), apart from what little news I heard in the family. One was ‘M*A*S*H’ (with Alan Alda and his frailness, anarchism, humanism and civil rights); the others were ‘The Practice’ and ‘Boston Legal’, which do lay bare the many grey areas that lawyers have to deal with (‘The Practice’ also influenced a lot of my understanding of civil rights and the First Amendment, but as it is a TV show, how much of it reflects how lawyers actually practice, and how much moral dilemma they really face, can only be guessed at). In books it is artists like John Grisham and Michael Connelly, as well as Perry Mason and Agatha Christie. In non-fiction, look at the treasures in the bombayhighcourt e-books corner and the series of Hamlyn Lectures. I have to warn that all of the above are major time-sinks, but rewarding in their own way. I also haven’t read all of them, as time and interests are constrained, but I do know they are good for understanding a bit of our history. I do crave a meetup kind of scenario where non-lawyers can read and discuss facets of law.

All that understanding was vastly amplified by Groklaw.net, which at the very least made non-lawyers able to decipher and understand what is going on in the free software world. After PJ (Pamela Jones) closed it in 2013 due to total surveillance by the Free World (i.e. the United States of America, NSA), we have been thirsty. We occasionally get somewhat mildly interesting articles on lwn.net or arstechnica.net, but nowhere near the sheer brilliance of Groklaw.

So it was a sheer stroke of luck that I met Mr. Bradley M. Kuhn, who works with Karen Sandler at the Software Freedom Conservancy. While I wanted to be there for his presentation, it was just one of those days which don’t go as planned. However, as we met socially and over e-mail, there were two basic questions I asked him, which also capture why we need to fight for software freedom in the court of law. Below is a re-wording of what he shared.

Q1. Why do people think that the GPL still needs to be tested in a court of law, while there are gpl-violations cases which have been more or less successfully defended in court?

Bradley Kuhn – A GPL violation case is basically about a violation of one or more clauses of the GPL license, not about the GPL license as a whole, and my effort during my lifetime would be to make/have such precedents that the GPL is held to be a valid license in a court of law.

Q2. Let’s say the GPL is held to be valid in a court of law: would the FSF benefit monetarily? At least to my mind it might be so, as more people and companies could be convinced to use strong copyleft licenses such as GPLv3 or AGPLv3.

Bradley Kuhn – It may or may not. It is possible that even after winning, people and especially companies may go for weak copyleft licenses if that suits them. The only benefit would probably be to those who are already using GPLv3, as the law could be used to protect them as well. Although we would want and welcome companies who use a strong copyleft license such as the GPL, the future is ‘in the future’ and hence uncertain. Both possibilities co-exist.

While Bradley didn’t say it, I would add here that it would probably also mean moving from a more offensive mode (which GPL-violations work is based upon: a violation occurs and somebody, either from the victim’s side or a bystander, notices the violation and brings it to the notice of the victim and the GPL-violations team) to perhaps having the license defended by the DMCA people themselves, once the GPL is held as a valid license in the eyes of the law. Although whether you should use the DMCA or not is a matter of choice, of personal belief system, as well as of your legal recourses.

I have to share that the FSF and the GPL-violations team are probably very discerning when they take up the fight, as most of the work done by them is pro bono (i.e. they don’t make a single penny/paisa from the work done therein). In view of scarce resources, it makes sense to go only for the biggest violators, in the hope that you can either make them agree to compensate and to honour the terms of the license of the software/hardware combination in question, or sue them, with a share of the reward/compensation awarded by the Court, and maybe some proceeds donated by the defendant and by people like you and me, going to make sure that the Conservancy and the GPL-violations team are still around to help the next time something similar happens.

Bradley Kuhn presenting at #Debconf 16

Now, as far as his presentation is concerned (the video can be seen at http://meetings-archive.debian.net/pub/debian-meetings/2016/debconf16/The_Supreme_Court_of_DFSGFree.webm), I thought it was tame. While he talked about ‘gaming the system’ in some sense, he was sharing that the debian-legal system works (most of the time). The list actually works because many people far more brilliant than me take the time to understand the intricacies of various licenses, how they should be interpreted through the excellently written Debian Free Software Guidelines, and whether the license under discussion contravenes the DFSG or is in accordance with it. I do agree with his point, though, that the ftp-master(s) and the team may not be the right people to judge a license’s adherence to the DFSG, and that rejecting a package from the archive without giving a reason is a problem.

I actually asked the same question on debian-legal, and as I had guessed, it seems there is enough review of the licenses per se, as the answer from Paul Wise shows. Charles Pessley also shared an idea he has documented, which probably didn’t get much traction as it involves more ‘work’ for DDs without any benefit to show for it. All in all, I hope this sheds some light on why there is a need to be more aware of law in software freedom. Two organizations which work on software freedom from a legal standpoint are SFLC (Delhi), headed by the charming Mr. Eben Moglen, and ALF (Bangalore). I do hope more people, especially developers, take a bit more interest in some of the resources mentioned above.


Filed under: Miscellaneous Tagged: #Alternative Law Forum, #bombayhighcourt e-library, #Common Law, #Debconf16, #Fiction, #Hamlyn lectures, #India, #Jurispudence, #legal fiction, #real estate bubble, #SFLC.in, #Software Freedom, #timesink, Doha, Law

Junichi Uekawa: The icfpc contest.

9 August, 2016 - 04:19
The icfpc contest. I didn't get the joke about map and fold until it was complete.

Michael Stapelberg: Debian Code Search: improving client-side latency

8 August, 2016 - 14:45

A while ago, it occurred to me that querying Debian Code Search seemed slow, which surprised me because I previously spent quite some effort on making it faster, see Debian Code Search Instant and Taming the latency tail for the most recent substantial architecture overhaul and related optimizations.

Upon taking a closer look, I realized that while performing the search query on the server side was pretty fast, the perceived slowness was due to the client side being slow. By “being slow”, I mean that it took a long time until something was drawn on the screen (high latency) and that what was happening on the screen was janky (stuttering, not smooth).

Part of that slowness was due to historical reasons: the client-side architecture was optimized for the use-case where users open Debian Code Search’s index page and then submit a search query, but I was using Chrome’s address bar to send a search query (type “codesearch”, then hit the TAB key). Further, we only added a non-JavaScript version after we launched the JavaScript version. Hence, the redirects and progressive enhancements we implemented are more of a kludge than a well-thought-out design.

After this bit of original investigation, I opened GitHub issue #69 to track the work on making Debian Code Search faster. In that issue, I captured how Chrome’s network inspector visualizes the work necessary to render the page:

A couple of quick wins

There are a couple of little fixes and improvements on which I’m not going to spend too much time, but which I list for completeness anyway just in case they come in handy for a similar project of yours:

Bigger changes

The URL pattern has changed. Previously, we had two areas of the website, one for JavaScript-compatible clients and one for the rest. When you hit the wrong one, you were redirected. In some cases, we couldn’t tell which area was the correct one for you, so you would always incur a redirect; one example of this is the search bar. With the new URL pattern, we deliver both versions under the same URL: the elements only used by the JavaScript code are hidden using CSS by default, then made visible by JavaScript code. The elements only used by the non-JavaScript code are wrapped in a <noscript> tag.
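
A minimal sketch of that pattern (element names and classes invented for illustration, not taken from the actual Debian Code Search templates):

<!-- used only by the JavaScript UI; a .js-only { display: none } rule hides
     it until the JavaScript code removes the class -->
<div id="searchresults" class="js-only"></div>

<!-- used only by the non-JavaScript UI; only rendered when scripting is off -->
<noscript>
  <form action="/results" method="get">
    <input type="text" name="q">
    <button type="submit">Search</button>
  </form>
</noscript>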

All CSS which is required for the initial page rendering is now inlined in the responses, allowing the browser to immediately render a response without requiring any additional round trips.

All non-essential CSS has been moved into a separate CSS file which is loaded asynchronously. This is done using a pattern like <link rel="preload" href="foo.css" as="style" onload="this.rel='stylesheet'">, see also filamentgroup/loadCSS.

We switched from WebSockets to the EventSource API because the former is not compatible with HTTP/2, whereas the latter is. This removes a round trip and some custom code for WebSocket reconnecting, because EventSource does that for you.

The progress bar animation used to animate the background-position property. It turns out that browsers can only animate the position, scale, rotation and opacity properties smoothly, because such animations can be off-loaded to the GPU. Hence, we have re-implemented the progress bar animation using the position property.

The biggest win for improving client-side latency from the Chrome address bar was introducing Service Workers (see commit 7f31aef402cb782056e290a797f224171f4af270). Our Service Worker caches static assets and a placeholder results page. The placeholder page is presented immediately when you start a search (e.g. from the address bar), making the first response immediate, i.e. rendered within 100ms. Having assets and the result page out of the way, the first round trip is used for actually doing the search, removing all unnecessary overhead.

With all of these improvements in place, rendering latency goes down from half a second to well under 100 ms, and this is what the Chrome network inspector looks like:

Paul Tagliamonte: Using PKCS#11 on GNU/Linux

8 August, 2016 - 07:17

PKCS#11 is a standard API to interface with HSMs, smart cards, or other types of hardware-backed crypto. On my travel laptop, I use a few Yubikeys in PKCS#11 mode using OpenSC to handle system login. libpam-pkcs11 is a pretty easy-to-use module that will let you log into your system locally using a PKCS#11 token.
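
For reference, the PAM side of that is essentially a single line; a rough sketch (the exact file and options vary by setup, and the certificate-to-login mapping lives in /etc/pam_pkcs11/pam_pkcs11.conf) would be something like this in /etc/pam.d/common-auth:

auth sufficient pam_pkcs11.so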

One of the least documented things, though, was how to use an OpenSC PKCS#11 token in Chrome. First, close all web browsers you have open.

sudo apt-get install libnss3-tools

certutil -U -d sql:$HOME/.pki/nssdb
modutil -add "OpenSC" -libfile /usr/lib/x86_64-linux-gnu/opensc-pkcs11.so -dbdir sql:$HOME/.pki/nssdb
modutil -list "OpenSC" -dbdir sql:$HOME/.pki/nssdb 
modutil -enable "OpenSC" -dbdir sql:$HOME/.pki/nssdb

Now, we'll have the PKCS#11 module ready for nss to use, so let's double check that the tokens are registered:

certutil -U -d sql:$HOME/.pki/nssdb
certutil -L -h "OpenSC" -d sql:$HOME/.pki/nssdb

If this winds up causing issues, you can remove it using the following command:

modutil -delete "OpenSC" -dbdir sql:$HOME/.pki/nssdb

Dirk Eddelbuettel: drat 0.1.1: Updates schmupdates!

8 August, 2016 - 04:39

One year ago (tomorrow) drat 0.1.0 was released. It held up rather well, but a number of small fixes and enhancements piled up, along with somewhat-finished to still-raw additions to the examples/ sections. With that, we are happy to announce drat release 0.1.1 which arrived on CRAN earlier today.

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories are what we use. In other words, friends don't let friends use install_github(). Just kidding. Maybe. Or not.

This version 0.1.1 builds on the previous release from one year ago. Several users sent in nicely focused pull requests, and I added a bit of spit and polish here and there.

The NEWS file (added belatedly in this release) summarises the release as follows:

Changes in drat version 0.1.1 (2016-08-07)
  • Changes in drat functionality

    • Use dir.exists, leading to versioned Depends on R (>= 3.2.0)

    • Optionally pull remote before insert (Mark in PR #38)

    • Fix support for dots (Jan G. in PR #40)

    • Accept dots in package names (Antonio in PR #48)

    • Switch to https URLs at GitHub (Colin in PR #50)

    • Support additional fields in PACKAGE file (Jan G. in PR #54)

  • Changes in drat documentation

    • Further improvements and clarifications to vignettes

    • Travis script switched to run.sh from our fork

    • This NEWS file was (belatedly) added

Courtesy of CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Martin-Éric Racine: Debian within a Windows partition?

8 August, 2016 - 00:18
A few years ago, I remember that Ubuntu had a trick that allowed the distribution to be installed as one large file within a Windows partition. Does the same thing exist to install Debian?

Dirk Eddelbuettel: littler 0.3.1

7 August, 2016 - 20:45

The second release of littler as a CRAN package is now available, following the now more than ten-year history of the package started by Jeff in the summer of 2006 and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. It is still faster, and in my very biased eyes better, as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It prefers to live on Linux and Unix, has its difficulties on OS X due to yet another braindeadedness there (whoever thought case-insensitive filesystems were a good idea?) and simply does not exist on Windows (yet -- the build system could be extended -- see RInside for an existence proof, and volunteers welcome!).

This release brings us fixes and enhancements from three other contributors, a couple new example scripts, more robust builds, extended documentation and more -- see below for details.

The NEWS file entry is below.

Changes in littler version 0.3.1 (2016-08-06)
  • Changes in examples

    • install2.r now passes on extra options past -- to R CMD INSTALL (PR #37 by Steven Pav)

    • Added rcc.r to run rcmdcheck::rcmdcheck()

    • Added (still simple) render.r to render (R)markdown

    • Several examples now support the -x or --usage flag to show extended help.

  • Changes in build system

    • The AM_LDFLAGS variable is now set and used too (PR #38 by Mattias Ellert)

    • Three more directories, used when an explicit installation directory is set, are excluded (also #38 by Mattias)

    • Travis CI is now driven via run.sh from our fork, and deploys all packages as .deb binaries using our PPA where needed

  • Changes in package

    • SystemRequirements now mentions the need for libR, i.e. an R built with a shared library so that we can embed R.

    • The docopt and rcmdcheck packages are now suggested, and added to the Travis installation.

    • A new helper function r() is now provided and exported so that the package can be imported (closes #40).

    • URL and BugReports links were added to DESCRIPTION.

  • Changes in documentation

    • The help output for installGithub.r was corrected (PR #39 by Brandon Bertelsen)

Full details for the littler release are provided as usual at the ChangeLog page.

The code is available via the GitHub repo, from tarballs off my littler page and the local directory here -- and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian, and soon also via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Sean Whitton: git-push-all

7 August, 2016 - 05:45

I maintain Debian packages for several projects which are hosted on GitHub. I have a master packaging branch containing both upstream’s code and my debian/ subdirectory containing the packaging control files. When upstream makes a new release, I simply merge their release tag into master: git merge 1.2.3 (after reviewing the diff!).

Packaging things for Debian turns out to be a great way to find small bugs that need to be fixed, and I end up forwarding a lot of patches upstream. Since the projects are on GitHub, that means forking the repo and submitting pull requests. So I end up with three remotes:

origin: the Debian git server
upstream: upstream’s GitHub repo from which I’m getting the release tags
fork: my GitHub fork of upstream’s repo, where I’m pushing bugfix branches

I can easily push individual branches to particular remotes. For example, I might say git push -u fork fix-gcc-6. However, it is also useful to have a command that pushes everything to the places it should be: pushes bugfix branches to fork, my master packaging branch to origin, and definitely doesn’t try to push anything to upstream (recently an upstream project gave me push access because I was sending so many patches, and then got a bit annoyed when I pushed a series of Debian release tags to their GitHub repo by mistake).

I spent quite a lot of time reading git-config(1) and git-push(1), and came to the conclusion that there is no combination of git settings and a push command that do the right thing in all cases. Candidates, and why they’re insufficient:

git push --all
I thought about using this with the remote.pushDefault and branch.*.pushRemote configuration options. The problem is that git push --all pushes to only one remote, and it selects it by looking at the current branch. If I ran this command for all remotes, it would push everything everywhere.
git push <remote> : for each remote
This is the “matching push strategy”. It will push all branches that already exist on the remote with the same name. So I thought about running this for each remote. The problem is that I typically have different master branches on different remotes. The fork and upstream remotes have upstream’s master branch, and the origin remote has my packaging branch.

I wrote a perl script implementing git push-all, which does the right thing. As you will see from the description at the top of the script, it uses remote.pushDefault and branch.*.pushRemote to determine where it should push, falling back to pushing to the remote the branch is tracking. It won’t push something when all three of these are unspecified, and more generally, it won’t create new remote branches except in the case where the branch-specific setting branch.*.pushRemote has been specified. Magit makes it easy to set remote.pushDefault and branch.*.pushRemote.
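
For reference, these are plain git config settings; for the remote layout described above, something like this does the trick (using the fix-gcc-6 branch from the earlier example):

git config remote.pushDefault origin          # master and friends go to the Debian git server
git config branch.fix-gcc-6.pushRemote fork   # this bugfix branch goes to my GitHub fork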

I have this in my ~/.mrconfig:

git_push = git push-all

so that I can just run mr push to ensure that all of my work has been sent where it needs to be (see myrepos).

#!/usr/bin/perl

# git-push-all -- intelligently push most branches

# Copyright (C) 2016 Sean Whitton
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or (at
# your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

# Prerequisites:

# The Git::Wrapper, Config::GitLike, and List::MoreUtils perl
# libraries.  On a Debian system,
#     apt-get install libgit-wrapper-perl libconfig-gitlike-perl \
#         liblist-moreutils-perl

# Description:

# This script will try to push all your branches to the places they
# should be pushed, with --follow-tags.  Specifically, for each branch,
#
# 1. If branch.pushRemote is set, push it there
#
# 2. Otherwise, if remote.pushDefault is set, push it there
#
# 3. Otherwise, if it is tracking a remote branch, push it there
#
# 4. Otherwise, exit non-zero.
#
# If a branch is tracking a remote that you cannot push to, be sure to
# set at least one of branch.pushRemote and remote.pushDefault.

use strict;
use warnings;
no warnings "experimental::smartmatch";

use Git::Wrapper;
use Config::GitLike;
use List::MoreUtils qw{ uniq apply };

my $git = Git::Wrapper->new(".");
my $config = Config::GitLike->new( confname => 'config' );
$config->load_file('.git/config');

my @branches = apply { s/[ \*]//g } $git->branch;
my @allBranches = apply { s/[ \*]//g } $git->branch({ all => 1 });
my $pushDefault = $config->get( key => "remote.pushDefault" );

my %pushes;

foreach my $branch ( @branches ) {
    my $pushRemote = $config->get( key => "branch.$branch.pushRemote" );
    my $tracking = $config->get( key => "branch.$branch.remote" );

    if ( defined $pushRemote ) {
        print "I: pushing $branch to $pushRemote (its pushRemote)\n";
        push @{ $pushes{$pushRemote} }, $branch;
    # don't push unless it already exists on the remote: this script
    # avoids creating branches
    } elsif ( defined $pushDefault
              && "remotes/$pushDefault/$branch" ~~ @allBranches ) {
        print "I: pushing $branch to $pushDefault (the remote.pushDefault)\n";
        push @{ $pushes{$pushDefault} }, $branch;
    } elsif ( !defined $pushDefault && defined $tracking ) {
        print "I: pushing $branch to $tracking (probably to its tracking branch)\n";
        push @{ $pushes{$tracking} }, $branch;
    } else {
        die "E: couldn't find anywhere to push $branch";
    }
}

foreach my $remote ( keys %pushes ) {
    my @branches = @{ $pushes{$remote} };
    system "git push --follow-tags $remote @branches";
    exit 1 if ( $? != 0 );
}

Mirco Bauer: Ethereum GPU Mining on Linux How-To

7 August, 2016 - 05:35
TL;DR

Install/use Debian 8 or Ubuntu 16.04 then execute:

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ethereum/ethereum
sudo sed 's/jessie/vivid/' -i /etc/apt/sources.list.d/ethereum-ethereum-*.list
sudo apt-get update
sudo apt-get install ethereum ethminer
geth account new
# copy long character sequence within {}, that is your <YOUR_WALLET_ADDRESS>
# if you lose the passphrase, you lose your coins!
sudo apt-get install linux-headers-amd64 build-essential
chmod +x NVIDIA-Linux-x86_64-367.35.run
sudo ./NVIDIA-Linux-x86_64-367.35.run
ethminer -G -F http://yolo.ethclassic.faith:9999/0x<YOUR_WALLET_ADDRESS> --farm-recheck 200
echo done
My Attention Span is > 60 seconds

Ethereum is a crypto currency similar to Bitcoin, as it is based on the blockchain technology. Ethereum is not yet another Bitcoin clone though, since it has an additional feature called Smart Contracts that makes it unique and very promising. I am not going into the details of how Ethereum works; you can find those in great detail on the Internet. This post is about Ethereum mining. Mining is how crypto coins are created. You need to spend computing time to get coins out. At the beginning CPU mining was sufficient, but as the Ethereum network difficulty has increased you need to use GPUs, as they can calculate at a much higher hashrate than a general-purpose CPU can.

About 2 months ago I bought a new gaming rig, with a Nvidia GTX 1070 so I can experience virtual-reality gaming with a HTC Vive at a great framerate. As it turns out modern graphics cards are very good at hashing so I gave it a spin.

Initially I did this mining setup with Windows 10, as that is the operating system on my gaming rig. If you want to do Ethereum mining using your GPU, then you really want to use Linux. On Windows the GTX 1070 produced a hashrate of 6 MH/s (megahashes per second) while the same hardware does 25 MH/s on Linux. The hashrate multiplied by 4 by using Linux instead of Windows. Sounds good? Keep reading and follow this guide.

You have to pick a Linux distro to use for mining. As I am a Debian developer, all my systems run Debian, which is what I am also using for this guide. The same procedure can be done for Ubuntu as it is similar enough. For other distros you have to substitute the steps yourself. So I assume you already have Debian 8 or Ubuntu 16.04 installed on your system.

Install Ethereum Software

First we need the geth tool, which is the main Ethereum "client". Ethereum is really a peer-to-peer network, which means each node is a server and client at the same time. A node that contains the complete blockchain history in a database is called a full node. For this guide you don't need to run a full node, as mining pools do this for you. We still need geth to create the private key of your Ethereum wallet, because we have to receive the coins we are mining somewhere.

Add the Ethereum APT repository using these commands:

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ethereum/ethereum
sudo apt-get update

On Debian 8 (on Ubuntu you can skip this) you need to replace the repository name with this command:

sudo sed 's/jessie/vivid/' -i /etc/apt/sources.list.d/ethereum-ethereum-*.list
sudo apt-get update

Install ethereum, ethminer and geth:

sudo apt-get install ethereum ethminer geth
Create Ethereum Wallet

A wallet is where coins are "stored". They are not really stored in the wallet, because the wallet is just a private key that nobody else has. The balance of that wallet is visible to everyone using the blockchain database. And this is what full nodes do: they contain and distribute the database to all other peers. So use this command to create the first private key for your wallet:

geth account new

Be aware that this passphrase protects the private key of your wallet. Anyone who has access to that file and knows your passphrase will have full control over your coins. And also do not forget the passphrase; if you do, you lose all your coins!

The output of "geth account new" shows a long character/number sequence quoted in {}. This is your wallet address and you should write that number down, as if someone wants to send you money, then it is to that address. We will use that for the mining pool later.

Install (proprietary) nvidia driver

For OpenCL to work with Nvidia graphics cards, like my GTX 1070, you need to install the proprietary driver from Nvidia. If you have an older card, maybe the open-source drivers will work for you. For the Nvidia Pascal cards (the 10xx series) you will need this driver package.

After you have agreed to the terms, download the NVIDIA-Linux-x86_64-367.35.run file. Before we can use that installer, we need to install some dependencies it needs, as it will have to compile a Linux kernel module for you. Install the dependencies using this command:

sudo apt-get install linux-headers-amd64 build-essential

Now we can make the installer executable and run it like this:

chmod +x NVIDIA-Linux-x86_64-367.35.run
sudo ./NVIDIA-Linux-x86_64-367.35.run

If that step completed without error, then we should be able to run the mining benchmark!

ethminer -M -G

The -M means "run benchmark" and the -G is for GPU mining. The first time you run it, it will create a DAG file, and that will take a while; for me it took about 12 minutes on my GTX 1070. After that it should show an inner mean hashrate. H/s is hashes per second, KH/s is kilohashes per second (1,000 H/s) and MH/s is megahashes per second (1,000 KH/s), so 25 MH/s is 25 million hashes per second. I saw numbers around 25-30 MH/s; in real mining you will see a more stable average rather than a min/max range.

Pick Ethereum Network

Now it gets serious: you need to decide two things. First, which Ethereum network you want to mine on, and second, which pool to use.

Ethereum has two networks: one is called Ethereum One or Core, while the other is called Ethereum Classic. Ethereum made a hardfork to undo the consequences of a software bug in the DAO, a smart contract for a decentralized organization. Because of that bug, a blackhat was able to drain money from the DAO. The Ethereum developers held a poll and decided that the consequences would be undone. Not everyone agreed, so the old network stayed alive and is now called Ethereum Classic, ETC for short. The hardforked network kept the short name ETH.

This is important to understand for mining, because the hashing difficulty differs hugely between ETH and ETC. As of this writing, the network hashrate of ETC is about 20% of ETH's. Thus you need less computing time to get ETC coins and more time to get ETH coins; put differently, ETC mining is currently more profitable.
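
To make that concrete, here is a rough back-of-the-envelope sketch, not from the original post: the network hashrates, block time and reward below are illustrative assumptions, and the different market prices of ETH and ETC are ignored. Your expected share of blocks is simply your hashrate divided by the network hashrate.

#!/usr/bin/env python3
# Back-of-the-envelope mining estimate; all numbers are illustrative assumptions.
my_hashrate  = 25e6               # 25 MH/s, roughly a GTX 1070 on Linux
eth_network  = 5e12               # assumed ETH network hashrate (~5 TH/s)
etc_network  = 0.2 * eth_network  # ETC at ~20% of ETH, as stated above
block_time   = 14                 # assumed average block time in seconds
block_reward = 5.0                # ether per block

blocks_per_day = 24 * 3600 / block_time
for name, network in (("ETH", eth_network), ("ETC", etc_network)):
    share = my_hashrate / network          # fraction of blocks you expect to find
    print(name, "expected ether/day:", round(share * blocks_per_day * block_reward, 3))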

Pick a Pool

Hmmmm, I want a swimming pool, thanks! Just kidding... You can mine without a pool, which is called solo mining, but you will get less reward. A mining pool is a group of computers working on the same block to find a solution more quickly than others; the pool has an aggregated hashrate that is much higher than any solo miner's. Every block found by anyone in the pool is rewarded to everyone in the pool: the current reward of 5 ether per block is split according to the share of hashrate each member contributes (minus the pool fee). So while you get less per found block, you have a steady, smaller income instead of a larger reward with a much smaller chance of finding a block (in time). Simply said: you have to find a new block faster than the others to receive the reward.
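
As a sketch of that proportional split (not from the original post; the pool fee and pool hashrate below are made-up assumptions, and real pools differ in the exact payout scheme they use):

#!/usr/bin/env python3
# Proportional pool payout sketch; fee and pool hashrate are assumptions.
block_reward  = 5.0    # ether per block, as stated above
pool_fee      = 0.01   # assumed 1% pool fee
my_hashrate   = 25e6   # your miner, 25 MH/s
pool_hashrate = 2e9    # assumed aggregate pool hashrate, 2 GH/s

my_share = my_hashrate / pool_hashrate
payout = block_reward * (1 - pool_fee) * my_share
print("my payout per block the pool finds:", payout, "ether")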

If you want to mine Ethereum Classic (ETC) use one of the pools listed here (at the bottom of the page).

If you want to mine Ethereum One / Core (ETH) use one of the pools listed here.

Run ethminer

The instruction page of the pool website usually says how to start the miner program, but here is an example for the pool that I use (because pony!):

ethminer -G -F http://yolo.ethclassic.faith:9999/0x<YOUR_WALLET_ADDRESS> --farm-recheck 200

Profit

If this guide was helpful for you, you can tip me at ethereum:0x9ec1220d2f2fadd3f0c96e3007daa827bc83fbd6 or simply run ethminer with my wallet address for a day or two:

ethminer -G -F http://yolo.ethclassic.faith:9999/0x9ec1220d2f2fadd3f0c96e3007daa827bc83fbd6 --farm-recheck 200

Happy mining!

Robert Edmonds: Cable modems: Arris SB6190 vs. Netgear CM600

7 August, 2016 - 04:52

Recently I activated new cable ISP service at my home and needed to purchase a cable modem. There were only a few candidate devices that supported at least 24 downstream channels (preferably 32), and did not contain an integrated router or access point.

The first modem I tried was the Arris SB6190, which supports 32 downstream channels. It is based on the Intel Puma 6 SoC, and looking at an older release of the SB6190 firmware source reveals that the device runs Linux. This device, running the latest 9.1.93N firmware, goes into a failure mode after several days of uptime which causes it to drop 1-2% of packets. Here is a SmokePing graph that measures latency to my ISP's recursive DNS server, showing the transition into the “degraded” mode:

It didn't drop packets at random, though. Some traffic would be deterministically dropped, such as the parallel A/AAAA DNS lookups generated by the glibc DNS stub resolver. For instance, in the following tcpdump output:

[1] 17:31:46.989073 IP [My IP].50775 > 75.75.75.75.53: 53571+ A? www.comcast6.net. (34)
[2] 17:31:46.989102 IP [My IP].50775 > 75.75.75.75.53: 14987+ AAAA? www.comcast6.net. (34)
[3] 17:31:47.020423 IP 75.75.75.75.53 > [My IP].50775: 53571 2/0/0 CNAME comcast6.g.comcast.net., […]
[4] 17:31:51.993680 IP [My IP].50775 > 75.75.75.75.53: 53571+ A? www.comcast6.net. (34)
[5] 17:31:52.025138 IP 75.75.75.75.53 > [My IP].50775: 53571 2/0/0 CNAME comcast6.g.comcast.net., […]
[6] 17:31:52.025282 IP [My IP].50775 > 75.75.75.75.53: 14987+ AAAA? www.comcast6.net. (34)
[7] 17:31:52.056550 IP 75.75.75.75.53 > [My IP].50775: 14987 2/0/0 CNAME comcast6.g.comcast.net., […]

Packets [1] and [2] are the A and AAAA queries being initiated in parallel. Note that they both use the same 4-tuple of (Source IP, Destination IP, Source Port, Destination Port), but with different DNS IDs. Packet [3] is the response to packet [1]. The response to packet [2] never arrives, and five seconds later, the glibc stub resolver times out and retries in single-request mode, which performs the A and AAAA queries sequentially. Packets [4] and [5] are the type A query and response, and packets [6] and [7] are the AAAA query and response.

The Arris SB6190 running firmware 9.1.93N would consistently interfere with these parallel DNS requests, but only when operating in its “degraded” mode. It also didn't matter whether glibc was configured to use an IPv4 or IPv6 nameserver, or which nameserver was being used. Power cycling the modem would fix the issue for a few days.
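
This failure mode is easy to probe by hand. The following is a minimal sketch, not taken from the post: it builds the two DNS queries manually and sends them over a single UDP socket, so they share one 4-tuple just like the glibc stub resolver's parallel lookups; the resolver address and hostname are simply the ones from the tcpdump output above and should be replaced as needed.

#!/usr/bin/env python3
# Send an A and an AAAA query for the same name from one UDP socket (one
# 4-tuple) and report which responses come back within five seconds.
import random
import socket
import struct

RESOLVER = "75.75.75.75"     # resolver from the tcpdump above; use your own
NAME = "www.comcast6.net"

def build_query(qname, qtype, qid):
    # DNS header: ID, flags (only RD set), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    labels = b"".join(bytes([len(l)]) + l.encode() for l in qname.split(".")) + b"\x00"
    return header + labels + struct.pack("!HH", qtype, 1)   # QTYPE, QCLASS=IN

qid_a, qid_aaaa = random.sample(range(1, 0x10000), 2)
queries = {qid_a: (1, "A"), qid_aaaa: (28, "AAAA")}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5.0)
for qid, (qtype, _) in queries.items():
    sock.sendto(build_query(NAME, qtype, qid), (RESOLVER, 53))

answered = set()
try:
    while len(answered) < len(queries):
        data, _ = sock.recvfrom(4096)
        (rid,) = struct.unpack("!H", data[:2])
        if rid in queries:
            answered.add(rid)
            print("response received for", queries[rid][1], "query")
except socket.timeout:
    pass
for qid, (_, qtype_name) in queries.items():
    if qid not in answered:
        print("no response for", qtype_name, "query within 5 seconds")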

My ISP offered to downgrade the firmware on the Arris SB6190 to version 9.1.93K. This firmware version doesn't go into a degraded mode after a few days, but it does exhibit higher latency, and more jitter:

It seemed unlikely that Arris would fix the firmware issues in the SB6190 before the end of my 30-day return window, so I returned the SB6190 and purchased a Netgear CM600. This modem appears to be based on the Broadcom BCM3384, and looking at an older release of the CM600 firmware source reveals that the device runs the open-source eCos embedded operating system.

The Netgear CM600 so far hasn't exhibited any of the issues I found with the Arris SB6190 modem. Here is a SmokePing graph for the CM600, which shows median latency about 1 ms lower than the Arris modem:

It's not clear which company is to blame for the problems in the Arris modem. Looking at the DOCSIS drivers in the SB6190 firmware source reveals copyright statements from ARRIS Group, Texas Instruments, and Intel. However, I would recommend avoiding cable modems produced by Arris in particular, and cable modems based on the Intel Puma SoC in general.

Norbert Preining: Debian/TeX Live 2016.20160805-1

7 August, 2016 - 04:31

TUG 2016 is over, and I have returned from a wonderful trip to Toronto and Maine. High time to release a new checkout of the TeX Live packages. After that I will probably need some time before the next one, as there are a lot of plans on the table: upstream created a new collection, which means a new package in Debian that needs to go through NEW, and I am also planning to integrate tex4ht to give it an update. Help is greatly appreciated here.

This package also sees the (third) revision of how the config files for pdftex and luatex are structured; by now this has settled down. Hopefully it will close some of the issues that have appeared.

New packages

biblatex-ijsra, biblatex-nottsclassic, binarytree, diffcoeff, ecgdraw, fvextra, gitfile-info, graphics-def, ijsra, mgltex, milog, navydocs, nodetree, oldstandardt1, pdflatexpicscale, randomlist, texosquery

Updated packages

2up, acmart, acro, amsmath, animate, apa6, arabluatex, archaeologie, autobreak, beebe, biblatex-abnt, biblatex-gost, biblatex-ieee, biblatex-mla, biblatex-source-division, biblatex-trad, binarytree, bxjscls, changes, cloze, covington, cs, csplain, csquotes, csvsimple, datatool, datetime2, disser, dvipdfmx, dvips, emisa, epstopdf, esami, etex-pkg, factura, fancytabs, forest, genealogytree, ghsystem, glyphlist, gost, graphics, hyperref, hyperxmp, imakeidx, jadetex, japanese-otf, kpathsea, latex, lstbayes, luatexja, mandi, mcf2graph, mfirstuc, minted, oldstandard, optidef, parnotes, philosophersimprint, platex, protex, pst-pdf, ptex, pythontex, readarray, reledmac, sepfootnotes, sf298, skmath, skrapport, stackengine, sttools, tcolorbox, tetex, texinfo, texlive-docindex, texlive-es, texlive-scripts, thesis-ekf, tools, toptesi, tudscr, turabian-formatting, updmap-map, uplatex, uptex, velthuis, xassoccnt, ycbook.

Enjoy.
