Planet Debian

Planet Debian - http://planet.debian.org/

Dirk Eddelbuettel: AsioHeaders 1.11.0-1

8 January, 2016 - 08:48

Making it easier to use C++ in R has been a really nice and rewarding side effect of the work on Rcpp. One really good idea came from something Jay Emerson, Michael Kane and I had kicked about for way too long: shipping Boost headers for easier use by R and CRAN packages (such as their family of packages around bigmemory). The key idea here is headers: experienced C++ authors such as library writers can organise C++ code in such a way that one can (almost always) get by without any linking. That makes deployment so much easier in most use cases, and surely also with R, which knows how to set an include path.

So after years of "yes, we really should", Jay deserves the credit for actually sitting down almost three years ago and creating the first version. At the end of January 2013, we released BH version 1.51.0-0. By now the package is used by fifty-four other CRAN packages --- clearly a much stronger uptake than we ever expected. I took over maintenance at some point, and we are currently in line with the most recent Boost release, 1.60.0.

But some people were always lusting after (some) parts of Boost which also require linking. For some libraries (such as Boost.Date_Time in my RcppBDT package) you can set a #define to avoid the need for linking (and forego some string formatting or parsing we get from R anyway). Some others are trickier; Boost Regex is one example. I do not think you can use it without linking (but then C++11 helps and finally brings regular expressions into the standard library).

Another library which the systems / network programmers at work frequently rely upon is Boost.Asio, a cross-platform C++ library for network and low-level I/O programming. It was already used on CRAN by the iptools package by Bob Rudis and Oliver Keyes, which shied away from building on Windoze ('doze and Boost do get along, but the CRAN build system makes it a little more involved).

A couple of days ago, I somehow learned that the Asio library actually comes in two flavours: the well-known Boost.Asio (requiring linking), and also as a header-only standalone library! Well, well, well -- so I ran that by Bob and Oliver asking if that would be useful. And the response was a resounding and pretty much immediate Hell, Yes!.

So I wrapped it in a package, told Bob about it, who tested it and gave the thumbs up. And after a slightly longer-than-usual on-boarding, the package is now on CRAN. As network (and systems) programming is not entirely uncommon even at CRAN, I hope that this package may find a few more use cases. A new version of iptools should be forthcoming "shortly", including for the first time a Windows build thanks to this package. The indefatigable Martin Morgan already spotted our GitHub repo and scored the first fork.
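
If you want to kick the tyres, here is a minimal sketch (install.packages() and the LinkingTo field are standard R tooling; that AsioHeaders is consumed via LinkingTo just like BH is my assumption, not something stated above):

# install the header-only package from CRAN, non-interactively
Rscript -e 'install.packages("AsioHeaders", repos="https://cran.r-project.org")'
# a client package would then (presumably) declare in its DESCRIPTION:
#   LinkingTo: AsioHeaders
# so that R puts the shipped Asio headers on the include path at compile time.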

Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Elena 'valhalla' Grandi: Smartphones, ownership and hope for the fate of humanity

8 January, 2016 - 04:21
Smartphones, ownership and hope for the fate of humanity
Do you own your phone or does it own you? | DanielPocock.com


I have conflicting opinions about this article.

Usually I carry a dumb phone, so I'm not completely disconnected, but I'm mostly self-limited to "useful" communications by the fact that I have to pay for calls and SMSes. It also has a few useful features like showing the time¹, an alarm clock and an LED torch, but that's it.

I also carry a smartphone, but I've never been able to trust it with my personal data, so there is no email on it and no communication software. It's also always offline to preserve battery, unless I'm actually using it for something (usually maps). It does have an offline Wikipedia reader, which is the second thing I use it for most often. About half of the time I try to use it, however, it is off because I forgot to charge it, unless I've planned in advance to use it (which usually means I'm also carrying a laptop and will need to tether it).

So I guess that I should be agreeing with the article that offline life is better, and that we shouldn't depend on phones in our daily life, and mostly I do.

On the other hand, I'm not so sure that all of the people who seem to be interacting with a phone are actually disconnected from the local reality.

More than once I've experienced the use of smartphones as part of a local interaction: one typical case involves people having a conversation IRL and checking some fact on the internet and then sharing the results with the rest of the local group.

Actually, most² of the time I've seen a smartphone being used at our table while eating with friends or colleagues, it was being passed around to show something to the people at the table, or at the very least being read aloud from, so it was part of the local experience, not a way to disconnect from it.

I'm sure that there are cases of abuse, but I still have hope that most of the connected humanity is managing to find a good balance between online and offline.

¹ I don't want to go back to carrying a wrist watch. I remember them as something uncomfortable that ended up hitting stuff as I moved my hands, and I'd rather have my wrists free while I type, thanks. Pocket watches, OTOH...

² the main exception involved one young adult in the middle of significantly older relatives, which is a somewhat different issue, and one that I believe predated smartphones (IIRC in my case similar situations involved trying to be somewhere else by reading a book).

Francois Marier: Streamzap remotes and evdev in MythTV

8 January, 2016 - 00:50

Modern versions of Linux and MythTV enable infrared remote controls without the need for lirc. Here's how I migrated my Streamzap remote to evdev.

Installing packages

In order to avoid conflicts between evdev and lirc, I started by removing lirc and its config:

apt purge lirc

and then I installed this tool:

apt install ir-keytable
Remapping keys

While my Streamzap remote works out of the box with kernel 3.16, the keycodes that it sends to Xorg are not the ones that MythTV expects.

I therefore copied the existing mapping:

cp /lib/udev/rc_keymaps/streamzap /home/mythtv/

and changed it to this:

0x28c0 KEY_0
0x28c1 KEY_1
0x28c2 KEY_2
0x28c3 KEY_3
0x28c4 KEY_4
0x28c5 KEY_5
0x28c6 KEY_6
0x28c7 KEY_7
0x28c8 KEY_8
0x28c9 KEY_9
0x28ca KEY_ESC
0x28cb KEY_MUTE # |
0x28cc KEY_UP
0x28cd KEY_RIGHTBRACE
0x28ce KEY_DOWN
0x28cf KEY_LEFTBRACE
0x28d0 KEY_UP
0x28d1 KEY_LEFT
0x28d2 KEY_ENTER
0x28d3 KEY_RIGHT
0x28d4 KEY_DOWN
0x28d5 KEY_M
0x28d6 KEY_ESC
0x28d7 KEY_L
0x28d8 KEY_P
0x28d9 KEY_ESC
0x28da KEY_BACK # <
0x28db KEY_FORWARD # >
0x28dc KEY_R
0x28dd KEY_PAGEUP
0x28de KEY_PAGEDOWN
0x28e0 KEY_D
0x28e1 KEY_I
0x28e2 KEY_END
0x28e3 KEY_A

The complete list of all EV_KEY keycodes can be found in the kernel.

The following command will write this mapping to the driver:

/usr/bin/ir-keytable w /home/mythtv/streamzap -d /dev/input/by-id/usb-Streamzap__Inc._Streamzap_Remote_Control-event-if00

and the new mapping should take effect once MythTV is restarted.
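
To sanity-check the mapping before restarting MythTV, something like the following should work (the device path is the one used above; -r and -t are ir-keytable's read and test modes, to the best of my knowledge):

# dump the scancode-to-keycode table currently loaded in the driver
ir-keytable -r -d /dev/input/by-id/usb-Streamzap__Inc._Streamzap_Remote_Control-event-if00
# print decoded events live while pressing buttons on the remote (Ctrl-C to stop)
ir-keytable -t -d /dev/input/by-id/usb-Streamzap__Inc._Streamzap_Remote_Control-event-if00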

Applying the mapping at boot

While the naïve solution is to apply the mapping at boot (for example, by sticking it in /etc/rc.local), that only works if the right modules are loaded before rc.local runs.

A much better solution is to write a udev rule so that the mapping is written after the driver is loaded.

I created /etc/udev/rules.d/streamzap.rules with the following:

# Configure remote control for MythTV
# https://www.mythtv.org/wiki/User_Manual:IR_control_via_evdev#Modify_key_codes
ACTION=="add", ATTRS{idVendor}=="0e9c", ATTRS{idProduct}=="0000", RUN+="/usr/bin/ir-keytable -c -w /home/mythtv/streamzap -D 1000 -P 250 -d /dev/input/by-id/usb-Streamzap__Inc._Streamzap_Remote_Control-event-if00"

and got the vendor and product IDs using:

grep '^[IN]:' /proc/bus/input/devices

The -D and -P parameters control what happens when a button on the remote is held down and the keypress must be repeated. These delays are in milliseconds.

Michal Čihař: Enca 1.18

8 January, 2016 - 00:00

It seems that I messed up the last version of Enca and it was not possible to install it without an error. Now comes a hotfix which fixes that.

If you don't know Enca, it is an Extremely Naive Charset Analyser. It detects character set and encoding of text files and can also convert them to other encodings using either a built-in converter or external libraries and tools like libiconv, librecode, or cstocs.

Full list of changes for 1.18 release:

  • fix installation of devhelp documentation

Still, Enca is in maintenance mode only and I have no intention of writing new features. However, there is no limitation on other contributors; join the project at GitHub :-).

You can download from http://cihar.com/software/enca/.


Sven Hoexter: Failing with F5: stderr, stdout - who cares

7 January, 2016 - 23:32
[root@adc:Standby:In Sync] config # tmsh save /sys ucs /var/tmp/foo.ucs
Saving active configuration...
/var/tmp/foo.ucs is saved.
[root@adc:Standby:In Sync] config # tmsh save /sys ucs /var/tmp/foo.ucs > /dev/null
Saving active configuration...
[root@adc:Standby:In Sync] config # tmsh save /sys ucs /var/tmp/foo.ucs 2> /dev/null
/var/tmp/foo.ucs is saved.
[root@adc:Standby:In Sync] config #
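
If all you want is a silent save in a script or cron job, discarding both streams sidesteps the question of which one F5 picked (a trivial sketch, assuming tmsh returns a meaningful exit status):

# throw away stdout and stderr alike and check the exit code instead
tmsh save /sys ucs /var/tmp/foo.ucs > /dev/null 2>&1 || echo "ucs save failed" >&2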

Seems F5 is not alone with such glorious ideas. A coworker pointed out that the "ipspace list" command on our old NetApps outputs a space and a backspace in some places.

Zlatan Todorić: Merry Christmas

7 January, 2016 - 22:09

Today it is Christmas here (Serbs are majority Orthodox and so is my family). While I am not religious, it has a great tradition here and is probably the only day when the majority of the family gathers in one place.

Today I also remember my Debian family and want to express my love for it. I will try this year to dedicate much more time to Debian and hope to keep that pace going for many years (a lifetime?) to come.

Cheers to all and happy hacking.

Daniel Pocock: Do you own your phone or does it own you?

7 January, 2016 - 22:05

Have you started thinking about new year's resolutions for 2016? Back to the gym or giving up sugary drinks?

Many new year's resolutions have a health theme. Unless you have a heroin addiction, there may not be anything else in your life that is more addictive and has potentially more impact on your health and quality of life than your mobile phone. Almost every week there is some new report about the negative impact of phone use on rest or leisure time. Children are particularly at risk and evidence strongly suggests their grades at school are tanking as a consequence.

Can you imagine your life changing for the better if you switched off your mobile phone or left it at home for one day per week in 2016? If you have children, can you think of anything more powerful than the example you set yourself to help them stay in control of their phones? Children have a remarkable ability to emulate the bad habits they observe in their parents.

Are you in control?

Turning it off is a powerful act of showing who is in charge. If you feel you can't live without it, then you are putting your life in the hands of the people who expect an immediate answer to their calls, your phone company and the Silicon Valley executives who make all those apps you can't stop using.

As security expert Jacob Appelbaum puts it, cell phones are tracking devices that also happen to make phone calls. Isn't that a chilling thought to reflect on the next time you give one as a Christmas gift?

For your health, your children and your bank balance

Not so long ago we were having lunch in a pizza restaurant in Luzern, a picturesque lakeside town at the base of the Swiss Alps. Luzern is a popular first stop for tourists from all around the world.

A Korean family came along and sat at the table next to us. After ordering their food, they all immediately took out their mobile devices and sat there in complete silence, the mother and father, a girl of eight and a boy of five, oblivious to the world around them and even each other, tapping and swiping for the next ten minutes until their food arrived.

We wanted to say hello to them; I joked that I should beep first, initiating communication with the sound of a text message notification.

Is this how all holidays will be in future? Is it how all families will spend time together? Can you imagine your grandchildren and their children sharing a meal like this in the year 2050 or beyond?

Which gadgets does Bond bring to Switzerland?

On Her Majesty's Secret Service is one of the more memorable Bond movies for its spectacular setting in the Swiss Alps, the location now transformed into a mountain-top revolving restaurant visited by thousands of tourists every day with a comfortable cable car service and hiking trails with breathtaking views that never become boring.

Can you imagine Bond leaving behind his gun and his skis and visiting Switzerland with a smartphone instead? Eating a pizza with one hand while using the fingertips of the other to operate an app for making drone strikes on villains, swiping through Tinder for a new girl to replace the one who died (from boredom) in his previous "adventure" and letting his gelati melt while engrossed in a downhill ski or motorcycle game in all the glory of a 5.7" 24-bit colour display?

Of course it's absurd. Would you want to live like that yourself? We see more and more of it in people who are supposedly in Switzerland on the trip of a lifetime. Would you tolerate it in a movie? The mobile phone industry has paid big money to have their technology appear on the silver screen, but audience feedback shows people are frustrated with movies that plaster the contents of text messages across the screen every few minutes; hopefully Bond movies will continue to plaster bullets and blood across the screen instead.

Time for freedom

How would you live for a day or a weekend or an entire holiday without your mobile phone? There are many small frustrations you may experience but the biggest one and the indirect cause of many other problems you will experience may be the inability to tell the time.

Many people today have stopped wearing a watch, relying instead upon their mobile phone to tell the time.

Without either a phone or a watch, frustration is not far away.

If you feel apprehension just at the thought of leaving your phone at home, the lack of a watch may be a subconscious factor behind your hesitation.

Trying is better than reading

Many articles and blogs give opinions about how to buy a watch, how much to spend and what you can wear it with. Don't spend a lot of time reading any of it; if you don't know where to start, simply go down to the local high street or mall and try them. Start with the most glamorous and expensive models from Swiss manufacturers, as these are what everything else is compared to, and then perhaps proceed to look more widely. Vendors on Amazon and eBay now distribute a range of high-quality Japanese watches, such as Orient and Invicta, at a fraction of the price of those in the stores, but you still need to try a few first to identify your preferred style and case size.



Similarity of Invicta (from Amazon) and Rolex Submariner

You may not know whether you want a watch that is manually wound, automatically wound or battery operated. Buying a low-cost automatic model online could be a good way to familiarize yourself before buying anything serious. Mechanical watches have a smoother and more elegant second-hand movement and will survive the next Carrington event but may come to grief around magnets - a brief encounter with a low-cost de-gausser fixes that.

Is it smart to buy a smart watch?

If you genuinely want to have the feeling of complete freedom and control over technology, you may want to think twice about buying a smart watch. While it may be interesting to own and experiment with it some of the time, being free from your phone means being free from other electronic technology too. If you do go for a smart watch (and there are many valid reasons for trying one some of the time), maybe make it a second (or third) watch.

Smart watches are likely to be controversial for some time to come due to their impact in schools (where mobile phones are usually banned) and various privacy factors.

Help those around you achieve phone freedom in 2016

There will be further blogs on this theme during 2016, each looking at the pressures people face when with or without the mobile phone.

As a developer of communications technology myself, you may be surprised to see me encouraging people not to use it every waking minute. Working on this technology makes me more conscious of its impact on those around me and society in general.

A powerful factor to consider when talking about any communications technology is the presence of peer pressure and the behavior of those around you. Going phone-free may involve helping them to consider taking control too. Helping them out with a new watch as a gift (be careful to seek advice on the style that they are likely to prefer or ensure the purchase can be exchanged) may be an interesting way to help them engage with the idea and every time they look at the time, they may also be reminded of your concern for their freedom.

Rhonda D'Vine: Chipzel

7 January, 2016 - 21:28

Happy new year everyone! Let's start with another round of nice music, this time coming from Chipzel, who is a great chiptune composer. Given that I come from a C64 background I love chiptunes, and she does a really great job in that area. Check it out!

  • Focus: The first tune I heard and I still like it. I had it as ringtone for a while. :)
  • To The Sky: Nice one too; always set your goals high.
  • Interstellaria OST - Credits: While listening to the soundtrack I thought I might give the game a try, too.

And like always, enjoy!


Michal Čihař: Weekly phpMyAdmin contributions

7 January, 2016 - 18:12

Okay, this report is not weekly and is a bit late, but anyway, here comes a report covering the last two weeks of 2015.

As you might expect there were some days off, but still quite some work has been done. I've focused on encoding conversions and the usage of the mb_* functions. One of the results was a cleanup PR and some open questions. The PR has been merged in the meantime, and we will probably make the mbstring dependency optional again. The rest was pretty much just bug fixing.

Fixed issues:


Enrico Zini: downgrading-network-manager

7 January, 2016 - 17:37
Downgrading network-manager

This morning I woke up. Bad idea.

I find in the work mail a compiler error that I cannot reproduce, so I need to log into a machine at work. But #809195.

I decided to downgrade network-manager. I recall there was a tool to download packages from snapshot.debian.org, I discussed it recently on IRC, let's sync the IRC logs from my server. Or not (#810212).

Never mind, I'll log into the server and grep. Ooh, it's debsnap. However, it doesn't quite do what I hoped (#667712).

After some help from #debian-devel (thanks jcristau and LebedevRI), here is how to downgrade network-manager:

# echo "deb http://snapshot.debian.org/archive/debian/20151125T155830Z/ sid main" >> /etc/apt/sources.list
# apt -o Acquire::Check-Valid-Until=false update
# apt -o Acquire::Check-Valid-Until=false install network-manager=1.0.6-1
# # Remove snapshot.debian.org from /etc/apt/sources.list
# service network-manager restart

And as user:

$ killall nm-applet
$ nm-applet &
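
One extra step I would consider (my addition, not part of the recipe from #debian-devel): put the package on hold as root so a routine upgrade does not pull the broken version straight back in, and release the hold once the bug is fixed:

# prevent apt from upgrading network-manager again
apt-mark hold network-manager
# later, once a fixed version is available
apt-mark unhold network-manager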

The yak is now nice and shaved, I can now go and see what those compiler errors are all about.

Actually, no, there was still an unshaved patch on the yak, and now we have a debcya script.

Thorsten Glaser: “git find” published; test, review, fix it please

7 January, 2016 - 08:24

I just published the first version of git find on gh/mirabilos/git-find for easy collaboration. The repository deliberately only contains the script and the manual page so it can easily be merged into git.git with complete history later, should they accept it. git find is MirOS licenced. It does require a recent mksh and some common utility extensions to deal with NUL-separated lines (sort -z, grep -z, git ls-tree -z); also, support for '\0' in tr(1) and a comm(1) that does not choke on embedded NULs in lines.

To install or uninstall it, run…

	$ git clone git@github.com:mirabilos/git-find.git
	$ cd git-find
	$ sudo ln -sf $PWD/git-find /usr/lib/git-core/
	$ sudo cp git-find.1 /usr/local/share/man/man1/
	… hack …
	$ sudo rm /usr/lib/git-core/git-find \
	    /usr/local/share/man/man1/git-find.1

… then you can call it as “git find” and look at the documentation with “git help find”, as is customary.

The idea behind this utility is to have a tool like “git grep” that acts on the list of files known to git (and not e.g. ignored files) to quickly search for, say, all PNG files in the repository (but not the generated ones). “git find” acts on the index for the HEAD, i.e. whatever commit is currently checked-out (unlike “git grep” which also knows about “git add”ed files; fix welcome) and then offers a filter syntax similar to find(1) to follow up: parenthesēs, ! for negation, -a and -o for boolean are supported, as well as -name, -regex and -wholename and their case-insensitive variants, although regex uses grep(1) without (or, if the global option -E is given, with) -E, and the pattern matches use mksh(1)’s, which ignores the locale and doesn’t do [[:alpha:]] character classes yet. On the plus side, the output is guaranteed to be sorted; on the minus side, it is rather wastefully using temporary files (under $TMPDIR of course, so use of tmpfs is recommended). -print0 is the only output option (-print being the default).
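
By way of illustration, a hypothetical session using only the primaries listed above (I have not exercised every corner case, so quoting and exact behaviour may differ):

	$ # all PNG and JPEG files known to git, except anything under doc/
	$ git find \( -name '*.png' -o -iname '*.jpg' \) ! -wholename 'doc/*'
	$ # the same idea with NUL-separated output for further processing
	$ git find -name '*.png' -print0 | xargs -0r ls -l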

Another mode “forwards” the file list to the system find; since that doesn’t support DOS-style response files, this only works if the number of files is smaller than the operating system’s argument limit. This mode supports the full range (except -maxdepth) of the system find(1) filters, e.g. -mmin -1 and -ls, but it incurs a filesystem access penalty for the entire tree and doesn’t sort the output, though it can do -ls or even -exec.

The idea here is that it can collaboratively be improved, reviewed, fixed, etc. and then, should they agree, with the entire history, subtree-merged into git.git and shipped to the world.

Part of the development was sponsored by tarent solutions GmbH, the rest and the entire manual page were done in my vacation.

Stig Sandbeck Mathisen: Thoughts on Puppet 4 related packages

7 January, 2016 - 06:00

It’s been a while since I’ve looked at the puppet 4 packaging. During this time, a lot of things have happened.

puppet

Puppet 4 is out. I’ve done some packaging tests, and they are promising.

puppet-agent

Puppet Labs released puppet-agent recently, with instructions.

It looks like an umbrella project for building and bundling puppet, facter, augeas, ruby, and everything else to install on puppet managed nodes.

It downloads a lot of software over HTTP or git, and verifies at least some of it with md5 checksums. I don’t think this is needed for Debian.

puppet-server

This really needs packaging. There are probably lots of dependencies not packaged yet. I am not familiar with any of the build tools.

puppetdb

Needs packaging. There are probably lots of dependencies not packaged yet. I am not familiar with any of the build tools.

vim-puppet

This has moved outside the puppet repository.

Both puppetlabs and rodjek have vim modes for puppet. I’ve grown very fond of rodjek’s vim-puppet.

puppet-el

This has moved outside the puppet repository.

Both puppetlabs and lunaryorn have emacs modes for puppet.

puppet modules

There are a lot of puppet modules in the team VCS.

Steinar H. Gunderson: IPv6 non-alternatives: DJB's article, 13 years later

7 January, 2016 - 02:54

With the world passing 10% IPv6 penetration over the weekend, we see the same old debates coming up again; people claiming IPv6 will never happen (despite several years now of exponential growth!), and that if they had only designed it differently, it would have been all over by now.

In particular, people like to point to a 2002–3 article by D. J. Bernstein, complete with rants about how Google would never set up “useless IPv6 addresses” (and then they did exactly that in 2007—I was involved). It's difficult to understand exactly what the article proposes since it's heavy on calling people idiots and light on actual implementation details (as opposed to when DJB's gotten involved in other fields; e.g. thanks to him we now have elliptic-curve crypto that doesn't suck, even if the reference implementation was sort of a pain to build), but I will try to go through it nevertheless and show why I cannot find any way it would work well in practice.

One thing first, though: Sorry, guys, the ship has sailed. Whatever genius solution DJB may have thought up that I'm missing, and whatever IPv6's shortcomings (they're certainly there), IPv6 is what we have. By now, you can not expect anything else to arise and take over the momentum; we will either live with IPv6 or die with IPv4.

So, let's see what DJB says. As far as I can see, his primary call is for a version of IPv6 where the address space is an “extension” of the IPv4 space. For sake of discussion, let's call that “IPv4+”, although it would share a number of properties with IPv6. In particular, his proposal requires changing the OS and other software on every single end host out there, just as IPv6; he readily admits that and outlines how it's done in rough terms (change all structs, change all configuration files, change all databases, change all OS APIs, etc.). From what I can see, he also readily admits that IPv4 and IPv4+ hosts cannot talk to each other, or more clearly, we cannot start using the extended address space before almost everybody has IPv4+ capable software. (E.g., quote: “Once these software upgrades have been done on practically every Internet computer, we'll have reached the magic moment: people can start relying on public IPv6 addresses as replacements for public IPv4 addresses.”)

So, exactly how does the IPv4 address space fit into the IPv4+ address space? The article doesn't really say anything about this, but I can imagine only two strategies: Build the IPv4+ space around the IPv4 space (i.e., the IPv4 space occupies a little corner of the IPv4+ space, similar to how v4-mapped addresses are used within software—but not on the wire—today, to let applications treat IPv4 addresses uniformly as a sort of special IPv6 address), or build it as a hierarchical extension.

Let's look at the former first: one IPv4 address gives you one IPv4+ address. Somehow this seems to give you all the disadvantages of IPv4 and all the disadvantages of IPv6. The ISP is not supposed to give you any more IPv4+ addresses (or at least DJB doesn't want to contact his ISP about more—he also says that automatic address distribution does not change his argument), so if you have one, you're stuck with one. So you still need NAT. (DJB talks about “proxies”, but I guess that the way things evolved, this either actually means NAT, or it refers to the practice of using application-level proxies such as Squid or SOCKS proxies to reach the Internet, which really isn't commonplace anymore, so I'll assume for the sake of discussion it means NAT.)

However, we already do NAT. The IPv4 crunch happened despite ubiquitous NAT everywhere; we're actually pretty empty. So we will need to hand out IPv4+ addresses at the very least to new deployments, and also probably reconfigure every site that wants to expand and is out of IPv4 addresses. (“Site” here could mean any organizational unit, such as if your neighborhood gets too many new subscribers for your ISP's local addressing scheme to have enough addresses for you.)

A much more difficult problem is that we now need to route these addresses on the wire. Ironically, the least clear part of DJB's plan is step 1, saying we will “extend the format of IP packets to allow 16-byte addresses”; how exactly will this happen? For this scheme, I can only assume some sort of IPv4 option that says “the stuff in the dstaddr field is just the start and doesn't make sense as an IPv4 address on its own; here are the remaining 12 bytes to complete the IPv4+ address”. But now your routers need to understand that format, so you cannot do with only upgrading the end hosts; you also need to upgrade every single router out there, not just the end hosts. (Note that many of these do routing in hardware, so you can't just upgrade the software and call it a day.) And until that's done, you're exactly in the same situation as with IPv4/IPv6 today; it's incompatible.

I do believe this option is what DJB talks about. However, I fail to see exactly how it is much better than the IPv6 we got ourselves into; you still need to upgrade all software on the planet and all routers on the planet. The benefit is supposedly that a company or user that doesn't care can just keep doing nothing, but they do need to care, since they need to upgrade 100% of their stuff to understand IPv4+ before we can start even deploying it alongside IPv4 (in contrast with IPv6, where we now have lots of experience in running production networks). The single benefit is that they won't have to renumber—until they need to grow, at which point they need to anyway.

However, let me also discuss the other possible interpretation, namely that of the IPv4+ address space being an extension of IPv4, i.e. if you have 1.2.3.4 in IPv4, you have 1.2.3.4.x.x.x.x or similar in IPv4+. (DJB's article mentions 128-bit addresses and not 64-bit, though; we'll get to that in a moment.) People keep bringing this up, too; it's occasionally been called “BangIP” (probably jokingly, as in this April Fool's joke) due to the similarity with how explicit mail routing worked before SMTP became commonplace. I'll use that name, even though others have been proposed.

The main advantage of BangIP is that you can keep your Internet core routing infrastructure. One way or the other, they will keep seeing IPv4 addresses and IPv4 packets; you need no new peering arrangements etc. The exact details are unclear, though; I've seen people suggest GRE tunneling, ignoring the problems it has through NAT, and I've seen suggestions of IPv4 options for source/destination addresses, also ignoring that something as innocuous as setting the ECN bits has been known to break middleboxes left and right.

But let's assume you can pull that off, because your middlebox will almost certainly need to be the point that decapsulates BangIP anyway and converts it to IPv4 on the inside, presumably with a 10.0.0.0/8 address space so that your internal routing can keep using IPv4 without an IPv4+ forklift upgrade. (Note that you now lose the supposed security benefit of NAT, by the way, although you could probably encrypt the address.) Of course, your hosts will need to support IPv4+ still, and you will need some way of communicating that you are on the inside of the BangIP boundary. And you will need to know what the inside is, so that when you communicate on this side, you'll send IPv4 and not IPv4+. (For a home network with no routing, you could probably even just do IPv4+ on the inside, although I can imagine complications.)

But like I wrote above, experience has shown us that 32 extra bits isn't enough. One layer of NAT isn't doing it; we need two. You could imagine the inter-block routability of BangIP helping a fair bit here (e.g., a company with too many machines for 10.0.0.0/8 could probably easily get more addresses by obtaining more external IPv4 addresses, each yielding another 10.0.0.0/8 block), but ultimately, it is a problem that you chop the Internet into two distinct halves that work very differently. My ISP will probably want to use BangIP for itself, meaning I'm on the “outside” of the core; how many of those extra bits will they allocate for me? Any at all?

Having multiple levels of bang sounds like pain; effectively we're creating a variable-length address. Does anyone ever want that? From experience, when we're creating protocols with variable-length addresses, people just tend to use the maximum level anyway, so why not design it with 128-bit to begin with? (The original IP protocol proposals actually had variable-length addresses, by the way.) So we can create our “32/96 BangIP”, where the first 32 bits are for the existing public Internet, and then every IPv4 address gives you a 2^96 addresses to play with. (In a sense, it reminds me of 6to4, which never worked very well and is now thankfully dead.)

However, this makes the inside/outside-core problem even worse. I now need two very different wire protocols coexisting on the Internet; IPv4+ (which looks like regular IPv4 to the core) for the core, and a sort of IPv4+-for-the-outside (similar to IPv6) outside it. If I build a company network, I need to make sure all of my routers are IPv4+-for-the-outside and talk that, while if I build the Internet core, I need to make sure all of my connections are IPv4 since I have no guarantee that I will be routable on the Internet otherwise. Furthermore, I have a fixed prefix that I cannot really get out of, defined by my IPv4 address(es). This is called “hierarchical routing”, and the IPv6 world gave it up relatively early despite it sounding like a great idea at first, because it makes multihoming a complete pain: If I have an address 1.2.3.4 from ISP A and 5.6.7.8 from ISP B, which one do I use as the first 32 bits of my IPv4+ network if I want it routable on the public Internet? You could argue that the solution for me is to get an IPv4 PI netblock (supposedly a /24, since we're not changing the Internet core), but we're already out of those, which is why we started this thing to begin with. Furthermore, if the IPv4/IPv4+ boundary is above my immediate connection to the Internet (say, ISP A doesn't have an IPv4 address, just IPv4+), I'm pretty hosed; I cannot announce an IPv4 netblock in BGP. The fact that the Internet runs on largely the same protocol everywhere is a very nice thing; in contrast, what is described here really would be a mess!

So, well. I honestly don't think it's as easy to just do “extension instead of alternative” when it comes to the address spaces. We'll just need to deal with the pain and realize that upgrading the equipment and software is the larger part of the job anyway, and we'll need to do that no matter what solution we go with.

Congrats on reaching 10%! Now get to work with the remaining 90%.

Ingo Juergensmann: Letsencrypt: challenging challenges

7 January, 2016 - 00:56

On December 3rd 2015 the Letsencrypt project went to public beta and this is a good thing! Having more and more websites running on good and valid SSL certificates for their HTTPS is a good thing, especially because Letsencrypt takes care of renewing the certs every now and then. But there are still some issues with Letsencrypt. Some people criticize the Python client needing root privileges and such. Others complain that Letsencrypt currently only supports webservers.

Well, I think for a public beta this is what we could have expected from the start: the Letsencrypt project focussed on a reference implementation, and other implementations are already available. But one thing seems to be a problem within the design of how Letsencrypt works, as it uses a challenge-response method to verify that the requesting user controls the domain for which the certificate shall be issued. This might work well in simple deployments, but what about slightly more complex setups, like multiple virtual machines and different protocols being involved?

For example: you're using domain A for your communication, like user@example.net for your mail, XMPP and SIP. Your mailserver runs on one virtual machine, whereas the webserver runs on a different virtual machine. The same for XMPP and SIP: a separate VM each.

Usually the Letsencrypt approach would be that you configure your webserver (by configuring a /.well-known/acme-challenge/* location, or by using a standalone server on port 443) to handle the challenge-response requests. This would give you an SSL cert for your webserver example.net. Of course you could copy this cert to your mail, XMPP and SIP servers, but then you have to do this every time the SSL cert gets renewed.
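
For reference, the webserver-based challenge mentioned above boils down to something like this with the reference client as of the public beta (a sketch; option names may have changed since, and example.net plus the webroot path are placeholders):

# webroot mode: the client writes the challenge response below
# <webroot>/.well-known/acme-challenge/ and the webserver serves it over HTTP
letsencrypt certonly --webroot -w /var/www/example.net -d example.net -d www.example.net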

Another challenge is of course that you don't have just one or two domains, but a bunch of domains. In my case I host more than 60 domains. Basically the mail for all domains is handled by my mailserver running on its own virtual machine. The webserver is located on a different VM. For some domains I offer XMPP accounts on a third VM.

What is the best way to solve this problem? Moving everything to just one virtual machine? Naaah! Writing some scripts to copy the certs as needed? Not very smart either. Using a network share for the certs between all VMs? Hmmm... would that work?

And what about the TLSA entries in your DNSSEC setup? When an SSL cert is renewed, the fingerprint might need an update in your DNS zone - for several protocols like mail, XMPP, SIP and HTTPS. At least the Bash implementation of Letsencrypt offers a "hook" which is called after the SSL cert has been issued.
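
As a sketch of what such a hook could do (my own example, not part of any Letsencrypt client; paths and hostnames are placeholders): recompute the "3 1 1" TLSA digest from the renewed certificate and push the files to the other VMs:

#!/bin/sh
# hypothetical post-renewal hook
CERT=/etc/ssl/example.net/cert.pem
# TLSA "3 1 1" data: SHA-256 over the certificate's SubjectPublicKeyInfo
openssl x509 -in "$CERT" -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256
# copy the renewed cert and key to the mail/XMPP/SIP VMs
# (reloading the services there is left out of this sketch)
for host in mail.example.net xmpp.example.net sip.example.net; do
  rsync -a /etc/ssl/example.net/ "root@$host:/etc/ssl/example.net/"
done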

What are your ways of dealing with the ACME protocol challenges in this kind of multi-domain, multi-VM setup?


Daniel Pocock: Want to use free software to communicate with your family in Christmas 2016?

6 January, 2016 - 19:25

Was there a friend or family member who you could only communicate with using a proprietary, privacy-eroding solution like Skype or Facebook this Christmas?

Would you like to be only using completely free and open solutions to communicate with those people next Christmas?

Developers

Even if you are not developing communications software, could the software you maintain make it easier for people to use "sip:" and "xmpp:" links to launch other applications? Would this approach make your own software more convenient at the same time? If your software already processes email addresses or telephone numbers in any way, you could do this.

If you are a web developer, could you make WebRTC part of your product? If you already have some kind of messaging or chat facility in your website, WebRTC is the next logical step.

If you are involved with the Debian or Fedora projects, please give rtc.debian.org and FedRTC.org a go and share your feedback.

If you are involved with other free software communities, please come to the Free-RTC mailing list and ask how you can run something similar.

Everybody can help

Do you know any students who could work on RTC under Google Summer of Code, Outreachy or any other student projects? We are particularly keen on students with previous experience of Git and at least one of Java, C++ or Python. If you have contacts in any universities who can refer talented students, that can also help a lot. Please encourage them to contact me directly.

In your workplace or any other organization where you participate, ask your system administrator or developers if they are planning to support SIP, XMPP and WebRTC. Refer them to the RTC Quick Start Guide. If your company web site is built with the Drupal CMS, refer them to the DruCall module; it can be installed by most webmasters without any coding.

If you are using Debian or Ubuntu in your personal computer or office and trying to get best results with the RTC and VoIP packages on those platforms, please feel free to join the new debian-rtc mailing list to discuss your experiences and get advice on which packages to use.

Everybody is welcome to ask questions and share their experiences on the Free-RTC mailing list.

Please also come and talk to us at FOSDEM 2016, where RTC is in the main track again. FOSDEM is on 30-31 January 2016 in Brussels, attendance is free and no registration is necessary.

This mission can be achieved with lots of people making small contributions along the way.

Norbert Preining: TeX Live security improvements

6 January, 2016 - 11:43

Today I committed a set of changes to the TeX Live subversion repository that should pave the way for better security handling in the future. Work is underway to use strong cryptographic signatures to verify that packages downloaded and installed into a TeX Live installation have not been tinkered with.

While there is still a long way to go and to figure out, the current changes already improve the situation considerably.

Status up to now

Although we did ship size and checksum information within the TeX Live database, this information was only considered by the installer when restarting an installation, to make sure that the downloaded packages are the ones we should use.

Neither the installer nor tlmgr used the checksum to verify that a downloaded package is correct, relying mostly on the fact that the packages are xz-compressed and would produce rubbish when there is a transfer error.

Although none of us believes that there is a serious interest in tinkering with the TeX Live distribution – maybe to steal just another boring scientific paper? – the door was still open.

Changes implemented

The changes committed to the repository today, which will be in a testing phase to get rid of bugs, consist of the following:

  • unification of installation routines: it didn’t make sense to have duplication of the actual download and unpack code in the installer and in tlmgr, so the code base was simplified and unified, and both the installer and tlmgr now use the same code paths to obtain, unpack, and install a package.
  • verification of size and checksum data: hand-in-hand with the above change, verification of downloaded packages based on both the size as well as the checksum is now carried out.

Together these two changes allow install-tl/tlmgr to verify that a package (.tar.xz) matches the information in the accompanying texlive.tlpdb. This still leaves open the option of tampering with a package and updating the texlive.tlpdb with the fixed-up checksums and sizes.
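
Conceptually, the new check amounts to something like the following (a simplified shell sketch of the idea; the real code lives in the TeX Live Perl infrastructure and currently uses md5):

# compare a downloaded container against the size/checksum recorded in texlive.tlpdb
expected_md5="..."   # value taken from texlive.tlpdb (placeholder)
expected_size="..."  # ditto
actual_md5=$(md5sum somepackage.tar.xz | cut -d' ' -f1)
actual_size=$(stat -c %s somepackage.tar.xz)
if [ "$actual_md5" != "$expected_md5" ] || [ "$actual_size" != "$expected_size" ]; then
    echo "size/checksum mismatch, refusing to install" >&2
    exit 1
fi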

Upcoming changes

For the future we do plan mostly two things:

  • switch to a stronger hashing algorithm: till now we have used md5, but we will switch to sha512 instead.
  • GnuPG signing of the checksum file of the texlive.tlpdb, that is, a detached signature of texlive.tlpdb.checksum

The last step above will give a very high level of security, as it will not be practically possible to alter the information in the texlive.tlpdb; thus no tampering with the checksum information of the containers, and in turn no tampering with the actual containers, will be possible.

Restrictions

Due to the wide range of supported architectures and operating systems, we will not make verification obligatory, but if a gpg binary is found, it will be used to verify the downloaded texlive.tlpdb.checksum.
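
In shell terms the planned mechanism would look roughly like this (a sketch of generic GnuPG usage, not the actual TeX Live code):

# on the signing side: detached, armored signature over the checksum file
gpg --armor --detach-sign texlive.tlpdb.checksum
# on the client side: verify only when a gpg binary is available at all
if command -v gpg >/dev/null 2>&1; then
    gpg --verify texlive.tlpdb.checksum.asc texlive.tlpdb.checksum
else
    echo "no gpg found, skipping signature verification" >&2
fi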

Details have to be hammered out and the actual programming has to be done, but we are on the way.

Enjoy.

Gunnar Wolf: Starting work / call for help: Debianizing Drupal 8

6 January, 2016 - 09:39

I have helped maintain Drupal 7.x in Debian for several years (and am now the leading maintainer). During this last year, I got a small article published in the Drupal Watchdog, where I stated:

What About Drupal 8?

Now... With all the hype set in the future, it might seem striking that throughout this article I only talked about Drupal 7. This is because Debian seeks to ship only production-ready software: as of this writing, Drupal 8 is still in beta. Not too long ago, we still saw internal reorganizations of its directory structure.

Once Drupal 8 is released, of course, we will take care to package it. And although Drupal 8 will not be a part of Debian 8, it will be offered through the Backports archive, following all of Debian's security and stability requirements.

Time passes, and I have to follow through on what I promised. Drupal 8 was released on November 18, so I must get my act together and put it in shape for Debian!

So, prompted by a mail by a fellow Debian Developer, I pushed today to Alioth the (very little) work I have so far done to this effect; every DD can now step in and help (and I guess DMs and other non-formally-affiliated contributors, but I frankly haven't really read how you can be a part of collab-maint).

So, what has been done? What needs to be done?

Although the code bases are massively different, I took the (un?)wise step of basing it off the Drupal 7 packaging, and started solving Lintian problems and installation woes. So far, I have an install that looks sane (but has not yet been tested), but it has lots of Lintian warnings and errors. The errors are mostly about missing sources, as Drupal 8 inlines many unrelated projects (fortunately documented and frozen to known-good versions); there are two possible courses of action:

  1. Preferred way: Find which already-made Debian package provides each of them, remove it from the binary package, and declare a dependency.
  2. Pragmatic way: As the compatibility must sometimes be to a specific version level, just provide the needed sources in debian/missing-sources

We can, of course, first go the pragmatic way and later start reviewing what can be safely depended upon. But for this, we need people with better PHP experience than me (which is not much to talk about). This cleanup will go hand in hand with cleaning up the extra-license-file Lintian warnings, as there is one for each such project. Of course, documenting each such license in debian/copyright is also a must.
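
For the preferred way, a rough helper to see which bundled components Debian may already package (apt-file and the directory layout of a stock Drupal 8 tarball are assumptions on my part):

# run from the unpacked Drupal 8 source; needs apt-file with an up-to-date cache
for dir in core/assets/vendor/* vendor/*/*; do
    lib=$(basename "$dir")                # name of an inlined component
    echo "== $lib =="
    apt-file search "$lib" | head -n 3    # rough hint at an existing Debian package
done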

Anyway, there is quite a bit of work to do. And later, we have to check that things work reliably. And also, quite probably, adapt any bits of dh-make-drupal to work with v8 as well as v7 (and I am not sure if I already deprecated v6, quite probably I have).

So... If you are interested in working on this, please contact me directly. If we get more than a handful of people, building a proper packaging team might be in order; otherwise we can just go on working as part of collab-maint.

I am very short on time, so any extra hands will be most helpful!

Lior Kaplan: PHP 5 Support Timeline

6 January, 2016 - 06:52

With the new year starting the PHP project is being asked to decide about the PHP 5 support timeline.

While aligning the PHP 5.6 support timeline with the release date of PHP 7.0 seems like common sense to keep the support schedule continuous, there's a big question whether to extend it further with an additional year of security support till the end of 2018. This would make PHP 5.6, the last of the PHP 5 branch, have two years of security support and de facto get the same life span as PHP 7.0 (ending support of both in Dec 2018).

But besides the support issues, this also affects the adoption rate of PHP 7.0, as the end of support for the PHP 5 branch serves as a catalyst. My concern is that with the additional security support the project would need to deal with many branches (5.6, 7.0, 7.1, 7.2 and then the to-be 7.3 branch).

I think we should limit what we guarantee (meaning keeping only one year of security support, till the end of 2017), and encourage project members and the ecosystem (e.g. Linux distributions) to maintain further security support on a best-effort basis.

This is already the case for releases out of official support, like the 5.3 and 5.4 branches (examples of backports done by Debian: 5.3 and 5.4). And of course, we also have companies that make their money out of long-term support (e.g. Red Hat).

On the other hand, we should help the ecosystem in doing such extended support, and host backported fixes in the project's git repo instead of having each Linux distro do the same patch work on its own.
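
In practice that shared effort could be as simple as a public branch onto which fixes are cherry-picked (a sketch only; the remote, branch and commit names are made up):

# hypothetical workflow for a community-maintained security branch
git checkout -b PHP-5.6-security upstream/PHP-5.6
git cherry-pick -x <commit-id-of-the-security-fix>
git push community PHP-5.6-security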



Daniel Pocock: Promoting free software and free communications on popular social media networks

5 January, 2016 - 21:27

Sites like Twitter and Facebook are not fundamentally free platforms, despite the fact they don't ask their users for money. Look at how Facebook's censors confused Denmark's mermaid statue with pornography or how quickly Twitter can make somebody's account disappear, frustrating public scrutiny of their tweets and potentially denying access to vital information in their "direct message" mailbox. Then there is the fact that users don't get access to the source code, users don't have a full copy of their own data and, potentially worst of all, if most people bothered to read the fine print of the privacy policy they would find it is actually a recipe for downright creepiness.

Nonetheless, a significant number of people have accounts in these systems and are to some extent contactable there.

Many marketing campaigns that have been successful recently, whether crowdfunding, political activism or just finding a lost cat, claim to have had great success because of Twitter or Facebook. Is this true? In reality, many users of those platforms follow hundreds of different friends, and if they only check in once a day, filtering algorithms show them only a small subset of what all their friends posted. Against these odds, just posting your great idea on Facebook doesn't mean that more than five people are actually going to see it. Those campaigns that have been successful have usually had something else going in their favour: perhaps a friend working in the media gave their campaign a plug on his radio show, or maybe they were lucky enough to be slashdotted. Maybe it was having the funds for a professional video production with models, passed off as something spontaneous. The use of Facebook or Twitter alone did not make such campaigns successful; it was just part of a bigger strategy where everything fell into place.

Should free software projects, especially those revolving around free communications technology, use such platforms to promote themselves?

It is not a simple question. In favour, you could argue that everything we promote through public mailing lists and websites is catalogued by Google anyway, so why not make it easier to access for those who are on Facebook or Twitter? On top of that, many developers don't even want to run their own mail server or web server any more, let alone a self-hosted social-media platform like pump.io. Even running a basic SIP proxy server for the large Debian and Fedora communities involved a lot of discussion about the approach to support it.

The argument against using Facebook and Twitter is that you are shooting yourself in the foot: when you participate in those networks, you give them even more credibility and power (which you could quantify using Metcalfe's law). The Metcalfe value of their network, being quadratic rather than linear, shoots ahead of the Metcalfe value of your own solution, putting your alternative even further out of reach. On top of that, the operators of the closed platform are able to evaluate who is responding to your message and how they feel about it, and use that intelligence to further undermine you.

How do you feel about this choice? How and when should free software projects and their developers engage with mainstream social media technology? Please come and share your ideas on the Free-RTC mailing list or perhaps share and Tweet them.

Benjamin Mako Hill: Celebrate Aaron Swartz in Seattle (or Atlanta, Chicago, Dallas, NYC, SF)

5 January, 2016 - 08:07

I’m organizing an event at the University of Washington in Seattle that involves a reading, the screening of a documentary film, and a Q&A about Aaron Swartz. The event coincides with the third anniversary of Aaron’s death and the release of a new book of Swartz’s writing that I contributed to.

The event is free and open to the public and details are below:

WHEN: Wednesday, January 13 at 6:30-9:30 p.m.

WHERE: Communications Building (CMU) 120, University of Washington

We invite you to celebrate the life and activism efforts of Aaron Swartz, hosted by UW Communication professor Benjamin Mako Hill. The event is next week and will consist of a short book reading, a screening of a documentary about Aaron’s life, and a Q&A with Mako who knew Aaron well – details are below. No RSVP required; we hope you can join us.

Aaron Swartz was a programming prodigy, entrepreneur, and information activist who contributed to the core Internet protocol RSS and co-founded Reddit, among other groundbreaking work. However, it was his efforts in social justice and political organizing combined with his aggressive approach to promoting increased access to information that entangled him in a two-year legal nightmare that ended with the taking of his own life at the age of 26.

January 11, 2016 marks the third anniversary of his death. Join us two days later for a reading from a new posthumous collection of Swartz’s writing published by New Press, a showing of “The Internet’s Own Boy” (a documentary about his life), and a Q&A with UW Communication professor Benjamin Mako Hill – a former roommate and friend of Swartz and a contributor to and co-editor of the first section of the new book.

If you’re not in Seattle, there are events with similar programs being organized in Atlanta, Chicago, Dallas, New York, and San Francisco.  All of these other events will be on Monday January 11 and registration is required for all of them. I will be speaking at the event in San Francisco.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.