Planet Debian

Planet Debian - http://planet.debian.org/

Matthew Garrett: Linux Foundation quietly drops community representation

21 January, 2016 - 06:21
The Linux Foundation is an industry organisation dedicated to promoting, protecting and standardising Linux and open source software[1]. The majority of its board is chosen by the member companies - 10 by platinum members (platinum membership costs $500,000 a year), 3 by gold members (gold membership costs $100,000 a year) and 1 by silver members (silver membership costs between $5,000 and $20,000 a year, depending on company size). Up until recently individual members ($99 a year) could also elect two board members, allowing for community perspectives to be represented at the board level.

As of last Friday, this is no longer true. The bylaws were amended to drop the clause that permitted individual members to elect any directors. Section 3.3(a) now says that no affiliate members may be involved in the election of directors, and section 5.3(d) still permits at-large directors but does not require them[2]. The old version of the bylaws is here - the only non-whitespace differences are in sections 3.3(a) and 5.3(d).

These changes all happened shortly after Karen Sandler announced that she planned to stand for the Linux Foundation board during a presentation last September. A short time later, the "Individual membership" program was quietly renamed to the "Individual supporter" program and the promised benefit of being allowed to stand for and participate in board elections was dropped (compare the old page to the new one). Karen is the executive director of the Software Freedom Conservancy, an organisation involved in the vitally important work of GPL enforcement. The Linux Foundation has historically been less than enthusiastic about GPL enforcement, and the SFC is funding a lawsuit against one of the Foundation's members for violating the terms of the GPL. The timing may be coincidental, but it certainly looks like the Linux Foundation was willing to throw out any semblance of community representation just to ensure that there was no risk of someone in favour of GPL enforcement ending up on their board.

Much of the code in Linux is written by employees paid to do this work, but significant parts of both Linux and the huge range of software that it depends on are written by community members who now have no representation in the Linux Foundation. Ignoring them makes it look like the Linux Foundation is interested only in promoting, protecting and standardising Linux and open source software if doing so benefits their corporate membership rather than the community as a whole. This isn't a positive step.

[1] Article II of the bylaws
[2] Other than in the case of the TAB representative, an individual chosen by a board elected via in-person voting at a conference


Scott Kitterman: Python Packaging Build-Depends

21 January, 2016 - 04:33

As a follow-up to my last post, where I discussed common Python packaging related errors, I thought it would be worthwhile to have a separate post on how to decide on build-depends for Python (and Python 3) packages.

The python ecosystem has a lot of packages built around supporting multiple versions of python (really python3 now) in parallel.  I’m going to limit this post to packages you might need to build-depend on directly.

Python (2)

Since Jessie (Debian 8), python2.7 has been the only supported python version.  For development of Stretch and backports to Jessie there is no need to worry about multiple python versions.  As a result, several ‘all’ packages are (and will continue to be) equivalent to their non-‘all’ counterparts.  We will continue to provide the ‘all’ packages for backward compatibility, but they aren’t really needed any more.

python (or python-all)

This is the package to build-depend on if your package is pure Python (no C extensions) and does not for some other reason need access to the Python header files.  (There are a handful of packages to which this latter caveat applies; if you don’t know whether it applies to your package, it almost certainly doesn’t.)

You should also build-depend on dh-python.  It was originally shipped as part of the python package (and there is still an old version provided), but to get the most current code with new bug fixes and features, build-depend on dh-python.

python-dev (or python-all-dev)

If your package contains compiled C or C++ extensions, this package either provides or depends on the packages that provide all the header files you need.

Do not also build-depend on python.  python-dev depends on it, so listing both is just unneeded redundancy.

python-dbg (or python-all-dbg)

Add this if you build a -dbg package (not needed for -dbgsym).

Other python packages

There is not, AFAICT, any reason to build-depend on any of the other packages provided (e.g. libpython-dev).  It is common to see things like python-all, python, python-dev, libpython-dev in build-depends.  This can be simplified to just python-all-dev, since it will pull the rest in.
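As a concrete illustration of that simplification, here is a hypothetical debian/control fragment (the package name and debhelper version are made up for the example):

```
# Before: redundant - python-all-dev already pulls in all of these
Build-Depends: debhelper (>= 9), python-all, python, python-dev,
               libpython-dev, dh-python

# After: equivalent, and much simpler
Build-Depends: debhelper (>= 9), python-all-dev, dh-python
```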

Python3

Build-depends selection for Python 3 is generally similar, except that we continue to want to be able to support multiple python3 versions (as we currently support python3.4 and python3.5).  There are a few differences:

All or not -all

Python3 transitions are much easier when C extensions are compiled for all supported versions.  In many cases, if you use pybuild, all that’s needed is to build-depend on python3-all-dev.  While this is preferred, in some cases it would be technically challenging and not worth the trouble.  This is mostly true for python3 based applications.

Python3-all is mostly useful for running test suites against all supported python3 versions.

Transitions

As mentioned in the python section above, build-depends on python3-{all-}dev is generally only needed for compiled C extensions.  For python3 these are also the packages that need to be rebuilt for a transition.  Please avoid -dev build-depends whenever possible for non-compiled packages.  Please keep your packages that do need rebuilding binNMU safe.

Transitions happen in three stages:

  1. A new python3 version is added to supported python3 versions and packages that need rebuilding due to compiled code and that support multiple versions are binNMUed to add support for the new version.
  2. The default python3 is changed to be the new version and packages that only support a single python3 version are rebuilt.
  3. The old python3 version is dropped from supported versions and packages with multiple-version support are binNMUed to remove support for the dropped version.

This may seem complex (OK, it is a bit), but it enables a seamless transition for packages with multi-version support since they always support the default version.  For packages that only support a single version there is an inevitable period when they go uninstallable once the default version has changed and until they can be rebuilt with the new default.

Specific version requirements

Please don’t build-depend against specific python3 versions.  Those don’t show up in the transition tracker.  Use X-Python3-Version (see python policy for details) to specify the version you need.
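For example, rather than build-depending on python3.5-dev, a source stanza can declare the requirement via the policy field instead (a minimal sketch; the source name is hypothetical):

```
Source: foo
Build-Depends: debhelper (>= 9), dh-python, python3-all-dev
X-Python3-Version: >= 3.5
```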

Summary

Please check your packages and only build-depend on the -dev packages when you need them.  Check for redundancy and remove it.  Try to build for all python3 versions.  Don’t build-depend on specific python3 versions.

Steve Kemp: So life in Finland goes on

20 January, 2016 - 22:50

So after living here in Finland for 6 months I've now bought a flat.

We have a few days to sort out mortgage paperwork, and assuming there are no problems we'll be moving into the new place on/around the 1st of March.

Finally I'll be living in Finland, with a sauna of my very own.

Interesting times.

In more developer-friendly news I made a new release of Lumail with the integrated support for IMAP. Let us hope people like it.

Craig Sanders: lm-sensors configs for Asus Sabertooth 990FX and M5A97 R2.0

20 January, 2016 - 18:49

I had to replace a motherboard and CPU a few days ago (bought an Asus M5A97 R2.0), and wanted to get lm-sensors working properly on it. Got it working eventually, which was harder than it should have been because the lm-sensors site is MIA; it seems to have been rm -rfed.

For anyone else with this motherboard, the config is included below.

This inspired me to fix the config for my Asus Sabertooth 990FX motherboard. Also included below.

# Asus M5A97 R2.0
# based on Asus M5A97 PRO from http://blog.felipe.lessa.nom.br/?p=93

chip "k10temp-pci-00c3"
     label temp1 "CPU Temp (rel)"

chip "it8721-*"
     label  in0 "+12V"
     label  in1 "+5V"
     label  in2 "Vcore"
     label  in3 "+3.3V"
     ignore in4
     ignore in5
     ignore in6
     ignore in7

     ignore fan3

     compute in0  @ * (515/120), @ / (515/120)
     compute in1  @ * (215/120), @ / (215/120)

     label temp1 "CPU Temp"
     label temp2 "M/B Temp"

     set temp1_min 30
     set temp1_max 70

     set temp2_min 30
     set temp2_max 60


     label fan1 "CPU Fan"
     label fan2 "Chassis Fan"

     label fan3 "Power Fan"
     ignore temp3

     set in0_min  12 * 0.95
     set in0_max  12 * 1.05

     set in1_min  5 * 0.95
     set in1_max  5 * 1.05

     set in3_min  3.3 * 0.95
     set in3_max  3.3 * 1.05

     ignore intrusion0

# Asus Sabertooth 990FX
# modified from the version at http://www.spinics.net/lists/lm-sensors/msg43352.html

chip "it8721-isa-0290"

# Temperatures
    label temp1  "CPU Temp"
    label temp2  "M/B Temp"
    label temp3  "VCORE-1"
    label temp4  "VCORE-2"
    label temp5  "Northbridge"         # I put all these here as a reference since the
    label temp6  "DRAM"                # Asus Thermal Radar tool on my Windows box displays
    label temp7  "USB3.0-1"            # all of them.
    label temp8  "USB3.0-2"            # lm-sensors ignores all but the CPU and M/B temps.
    label temp9  "PCIE-1"              # If that is really what they are.
    label temp10 "PCIE-2"

    set temp1_min 0
    set temp1_max 70

    set temp2_min 0
    set temp2_max 60

    ignore temp3

# Fans
    label fan1 "CPU Fan"
    label fan2 "Chassis Fan 1"
    label fan3 "Chassis Fan 2"
    label fan4 "Chassis Fan 3"
#    label fan5 "Chassis Fan 4"      # lm-sensors complains about this

    ignore fan2
    ignore fan3

    set fan1_min 600
    set fan2_min 600
    set fan3_min 600

# Voltages
    label in0 "+12V"
    label in1 "+5V"
    label in2 "Vcore"
    label in3 "+3.3V"
    label in5 "VDDA"


    compute  in0  @ * (50/12), @ / (50/12)
    compute  in1  @ * (205/120), @ / (205/120)

    set in0_min  12 * 0.95
    set in0_max  12 * 1.05

    set in1_min  5 * 0.95
    set in1_max  5 * 1.05

    set in2_min  0.80
    set in2_max  1.6

    set in3_min  3.20
    set in3_max  3.6

    set in5_min  2.2
    set in5_max  2.8

    ignore in4
    ignore in6
    ignore in7

    ignore intrusion0

chip "k10temp-pci-00c3"
     label temp1 "CPU Temp"
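The compute lines in these configs rescale the raw ADC reading into the actual rail voltage (the board divides the +12V and +5V rails down before the sensor chip sees them), and the inverse expression is used when limits are written back. A minimal Python sketch of the 50/12 scaling used for in0 above, with a hypothetical raw reading:

```python
# Sketch of what "compute in0  @ * (50/12), @ / (50/12)" does in lm-sensors:
# '@' is the raw value; the first expression maps raw -> displayed,
# the second maps displayed -> raw.

def raw_to_displayed(raw, num=50, den=12):
    """Scale a raw ADC voltage up to the actual rail voltage."""
    return raw * num / den

def displayed_to_raw(displayed, num=50, den=12):
    """Inverse scaling, applied when limits are written back to the chip."""
    return displayed * den / num

# A hypothetical raw reading of 2.88 V corresponds to a 12.0 V rail:
rail = raw_to_displayed(2.88)
# What "set in0_min  12 * 0.95" stores at the chip level:
limit_raw = displayed_to_raw(12 * 0.95)
```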


Michal Čihař: python-gammu 2.5

20 January, 2016 - 12:00

It has been quite some time since the last python-gammu release, and it's time to push fixes to the users.

This is really just a bugfix release, collecting minor fixes and fixing the testsuite with recent Gammu versions.

Full list of changes:

  • Compatibility with Gammu >= 1.36.7

Filed under: English Gammu python-gammu Wammu

Joey Hess: git-annex v6

20 January, 2016 - 00:28

Version 6 of git-annex, released last week, adds a major new feature: support for unlocked large files that can be edited as usual and committed using regular git commands.

For example:

git init
git annex init --version=6
mv ~/foo.iso .
git add foo.iso
git commit -m "added hundreds of megabytes to git annex (not git)"
git remote add origin ssh://server/dir
git annex sync origin --content # uploads foo.iso

Compare that with how git-annex has worked from the beginning, where git annex add is used to add a file, and then the file is locked, preventing further modifications of it. That is still a very useful way to use git-annex for many kinds of files, and is still supported of course. Indeed, you can easily switch files back and forth between being locked and unlocked.

This new unlocked file mode uses git's smudge/clean filters, and I was busy developing it all through December. It started out playing catch-up with git-lfs somewhat, but has significantly surpassed it now in several ways.

So, if you had tried git-annex before, but found it didn't meet your needs, you may want to give it another look now.

Now a few thoughts on git-annex vs git-lfs, and different tradeoffs made by them.

After trying it out, my feeling is that git-lfs brings an admirable simplicity to using git with large files. File contents are automatically uploaded to the server when a git branch is pushed, and downloaded when a branch is merged, and after setting it up, the user may not need to change their git workflow at all to use git-lfs.

But there are some serious costs to that simplicity. git-lfs is a centralized system. This is especially problematic when dealing with large files. Being a decentralized system, git-annex has a lot more flexibility, like transferring large file contents peer-to-peer over a LAN, and being able to choose where large quantities of data are stored (maybe in S3, maybe on a local archive disk, etc).

The price git-annex pays for this flexibility is that you have to configure it, and run some additional commands. And it has to keep track of what content is located where, since it can't assume the answer is "in the central server".

The simplicity of git-lfs also means that the user doesn't have much control over what files are present in their checkout of a repository. git-lfs downloads all the files in the work tree. It doesn't have facilities for dropping the content of some files to free up space, or for configuring a repository to only want to get a subset of files in the first place. On the other hand, git-annex has excellent support for all those things, and this comes largely for free from its decentralized design.

If git has shown us anything, it's perhaps that a little added complexity to support a fully distributed system won't prevent people using it. Even if many of them end up using it in a mostly centralized way. And that being decentralized can have benefits beyond the obvious ones.

Oh yeah, one other advantage of git-annex over git-lfs. It can use half as much disk space!

A clone of a git-lfs repository contains one copy of each file in the work tree. Since the user can edit that file at any time, and checking out a different branch can delete the file, git-lfs also stashes a copy inside .git/lfs/objects/.

One of the main reasons git-annex used locked files, from the very beginning, was to avoid that second copy. A second local copy of a large file can be too expensive to put up with. When I added unlocked files in git-annex v6, I found it needed a second copy of them, same as git-lfs does. That's the default behavior. But, I decided to complicate git-annex with a config setting:

git config annex.thin true
git annex fix

Run those two commands, and now only one copy is needed for unlocked files! How's it work? Well, it comes down to hard links. But there is a tradeoff here, which is why this is not the default: When you edit a file, no local backup is preserved of its old content. So you have to make sure to let git-annex upload files to another repository before editing them or the old version could get lost. So it's a tradeoff, and maybe it could be improved. (Only thin out a file after a copy has been uploaded?)
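The space saving comes from both names pointing at the same underlying data. A small Python illustration of the hard-link mechanism (the file names are stand-ins; git-annex's actual object layout and bookkeeping are more involved):

```python
import os
import tempfile

# Illustration of the hard-link trick behind "annex.thin": the annexed
# object and the unlocked work-tree file are two names for one inode,
# so the content is stored on disk only once.
with tempfile.TemporaryDirectory() as d:
    stored = os.path.join(d, "annex-object")  # stand-in for .git/annex/objects/...
    worktree = os.path.join(d, "foo.iso")     # stand-in for the unlocked file
    with open(stored, "wb") as f:
        f.write(b"big file contents")

    os.link(stored, worktree)  # second name, same data: no extra disk space

    same_inode = os.stat(stored).st_ino == os.stat(worktree).st_ino
    links = os.stat(worktree).st_nlink  # 2: both names share one copy
```

The flip side, as noted above, is that writing through either name changes the single shared copy, which is exactly why editing an unlocked thin file leaves no local backup of the old content.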

This adds a small amount of complexity to git-annex, but I feel it's well worth it to let unlocked files use half the disk space. If the git-lfs developers are reading this, that would probably be my first suggestion for a feature to consider adding to git-lfs. I hope for more opportunities to catch up to git-lfs in turn.

Jan Wagner: Trying icinga2 and icingaweb2 with Docker

19 January, 2016 - 15:44

In case you ever wanted to look at Icinga2, even into its distributed features, without messing with installing a whole server setup, this might be interesting for you.

At first, you need to have a running Docker on your system. For more information, have a look into my previous post!

Initiating Docker images
$ git clone https://github.com/joshuacox/docker-icinga2.git && \
  cd docker-icinga2
$ make temp
[...]
$ make grab
[...]
$ make prod
[...]
Setting IcingaWeb2 password

(Or using the default one)

$ make enter
docker exec -i -t `cat cid` /bin/bash  
root@ce705e592611:/# openssl passwd -1 f00b4r  
$1$jgAqBcIm$aQxyTPIniE1hx4VtIsWvt/
root@ce705e592611:/# mysql -h mysql icingaweb2 -p -e \  
  "UPDATE icingaweb_user SET password_hash='\$1\$jgAqBcIm\$aQxyTPIniE1hx4VtIsWvt/' WHERE name='icingaadmin';"
Enter password:  
root@ce705e592611:/# exit  
Setting Icinga Classic UI password
$ make enter
docker exec -i -t `cat cid` /bin/bash  
root@ce705e592611:/# htpasswd /etc/icinga2-classicui/htpasswd.users icingaadmin  
New password:  
Re-type new password:  
Adding password for user icingaadmin  
root@ce705e592611:/# exit  
Cleaning things up and making permanent
$ docker stop icinga2 && docker stop icinga2-mysql
icinga2  
icinga2-mysql  
$ cp -a /tmp/datadir ~/docker-icinga2.datadir
$ echo "~/docker-icinga2.datadir" > ./DATADIR
$ docker start icinga2-mysql && rm cid && docker rm icinga2 && \
  make runprod
icinga2-mysql  
icinga2  
chmod 777 /tmp/tmp.08c34zjRMpDOCKERTMP  
d34d56258d50957492560f481093525795d547a1c8fc985e178b2a29b313d47a  

Now you should be able to access the IcingaWeb2 web interface on http://localhost:4080/icingaweb2 and the Icinga Classic UI web interface at http://localhost:4080/icinga2-classicui.

For further information about this Docker setup please consult the documentation written by Joshua Cox who has worked on this project. For information about Icinga2 itself, please have a look into the Icinga2 Documentation.

Michal Čihař: Weekly phpMyAdmin contributions 16/2

19 January, 2016 - 12:00

Last week was mostly focused on refactoring. I've completely rewritten the user interface language selection and related metadata handling. The code is object based and fully covered by the testsuite, which was impossible with the previous one.

Besides that, there was the usual amount of bug fixes and a few improvements to the phpMyAdmin container for Docker.

All handled issues:

Filed under: English phpMyAdmin

Norbert Preining: Debian/TeX Live 2015.20160117-1 and biber 2.3-1

19 January, 2016 - 05:25

About one month has passed, and here is the usual update of TeX Live packages for Debian, this time also with an update to biber to accompany the updated version of biblatex. Nothing spectacular here besides fixes for some broken font links.

As a bonus this time, I provide (auto-generated) links to the packages on CTAN, so that one can check the package descriptions and manuals.

Updated packages

academicons, acro, apnum, babel, babel-french, babel-icelandic, babel-spanish, babel-vietnamese, bclogo, biblatex, bibtexperllibs, calxxxx-yyyy, chemformula, chemgreek, chess-problem-diagrams, chickenize, cjk-gs-integrate, cmcyr, csplain, datatool, dvips, embrac, enotez, etex-pkg, fibeamer, fithesis, invoice, isodoc, l3kernel, latexdiff, luamplib, mathastext, mcf2graph, media9, nameauth, newpx, newtx, newtxsf, nucleardata, paralist, pdftex, pgfplots, pict2e, pmxchords, prftree, ptex, reledmac, schwalbe-chess, siunitx, tempora, tetex, tex4ht, texinfo, texlive-cz, texlive-docindex, texlive-scripts, thalie, thuthesis, tkz-orm, uantwerpendocs, unicode-data, xassoccnt, xespotcolor.

New packages

continue, ecobiblatex, econometrics, getitems, librebodoni, lshort-estonian, moodle, nimbus15, seuthesix, uantwerpendocs.

Enjoy.

David Pashley: NullPointerExceptions in Xerces-J

18 January, 2016 - 21:40

Xerces is an XML library for several languages, but is a very common library in Java.

I recently came across a problem with code intermittently throwing a NullPointerException inside the library:

java.lang.NullPointerException
        at org.apache.xerces.dom.ParentNode.nodeListItem(Unknown Source)
        at org.apache.xerces.dom.ParentNode.item(Unknown Source)
        at com.example.xml.Element.getChildren(Element.java:377)
        at com.example.xml.Element.newChildElementHelper(Element.java:229)
        at com.example.xml.Element.newChildElement(Element.java:180)
        …
 
You may also find the NullPointerException in ParentNode.nodeListGetLength() and other locations in ParentNode.

Debugging this was not helped by the fact that the xercesImpl.jar is stripped of line numbers, so I couldn’t find the exact issue. After some searching, it appeared that the issue was down to the fact that Xerces is not thread-safe. ParentNode caches iterations through the NodeList of children to speed up performance and stores them in the Node’s Document object. In multi-threaded applications, this can lead to race conditions and NullPointerExceptions.  And because it’s a threading issue, the problem is intermittent and hard to track down.

The solution is to synchronise your code on the DOM, and this means the Document object, everywhere you access the nodes. I’m not certain exactly which methods need to be protected, but I believe it needs to be at least any function that will iterate a NodeList. I would start by protecting every access and testing performance, and removing some if needed.

/**
 * Returns the concatenation of all the text in all child nodes
 * of the current element.
 */
public String getText() {
    StringBuilder result = new StringBuilder();

    synchronized (m_element.getOwnerDocument()) {
        NodeList nl = m_element.getChildNodes();
        for (int i = 0; i < nl.getLength(); i++) {
            Node n = nl.item(i);

            if (n != null && n.getNodeType() == org.w3c.dom.Node.TEXT_NODE) {
                result.append(((CharacterData) n).getData());
            }
        }
    }

    return result.toString();
}

Notice the synchronized (m_element.getOwnerDocument()) { … } block around the section that deals with the DOM. The NPE would normally be thrown on the nl.getLength() or nl.item() calls.

Since putting in the synchronized blocks, we've gone from having 78 NPEs between 2:30am and 3:00am to having zero in the last 12 hours, so I think it's safe to say this has drastically reduced the problem.

The post NullPointerExceptions in Xerces-J appeared first on David Pashley.com.

Russ Allbery: wallet 1.3

18 January, 2016 - 11:40

It's been over a year since the last release of the wallet, a system for storing and retrieving secure credentials (currently relying on Kerberos authentication). There were a ton of pending changes, mostly thanks to work from Jon Robertson and Bill MacAllister.

I'm still really itching to rewrite all of this code, which is also part of why I haven't uploaded packages to Debian proper yet. I no longer like the way that I designed it, particularly in the Perl modules used by the server side, and want to rewrite it rather substantially. Thankfully, I'm starting to use it for work again, although only as a supplement to another in-house key management system. I might just barely be able to justify investing some effort in that as part of my job. We'll see. In the meantime, it feels awkward and clunky to work with, which makes me itch when I'm preparing new releases.

In any event, this release adds preliminary support for using Active Directory as a backend for Kerberos keytabs, and adds both nested (ACLs that are groups of other ACLs) and external (run an external command to make authorization decisions) ACLs. It also adds a root instance variant of ldap-attr, and a new object type: password, which will automatically generate a password if one wasn't already stored.

There are a few new wallet commands: update, which will always change the content of an object even if marked unchanging, and acl replace, which will replace all instances of an ACL as an owner field with some other ACL. There are also multiple new wallet reports, and various bug fixes to how ACLs are displayed.

You can get the latest version from the wallet distribution page.

James McCoy: Neovim coming to Debian

18 January, 2016 - 08:44

Almost 9 months after I took ownership of the Neovim RFP, I finally tagged & uploaded Neovim to Debian. It still has to go through the NEW queue, but it will soon be in an experimental release near you.

I'm holding off uploading it to unstable for the time being for a couple reasons.

  • It depends on a few libraries which have yet to see a stable release.
  • There still needs to be some thought about how to integrate it with the Vim ecosystem in Debian.

Many thanks to Jason Pleau for working on getting the parts of the Lua stack needed to support Neovim into Debian.

Russ Allbery: rra-c-util 5.10

18 January, 2016 - 07:18

Despite the name of the package, most of the changes in this release are actually to the Perl test infrastructure.

I decided to finally standardize the versions of the modules embedded in wallet, but discovered the need to add an exclusion list so that I don't have to change the version of the schema module. (That currently drives database schema upgrades.) While doing that, I rediscovered that I have two versions of the module version check that shared a ton of code, so they've now been refactored into a module (and then debugged again, since I broke various things about the Automake integration).

This release also fixes use of UNIX-specific path delimiters in my standard Perl docs/synopsis.t test, which fixed some failing tests in podlators.

I would have been done with this somewhat sooner, but the Travis-CI tests for rra-c-util started failing in the IPv6 server test, and it took a lot of debugging to figure out why. It turned out that the environment allows creation of IPv6 sockets but not connecting to them, and my test for whether IPv6 was working didn't account for that. Now it does, so those tests are properly skipped when IPv6 is half-configured.

You can get the latest version from the rra-c-util distribution page.

Lunar: Reproducible builds: week 38 in Stretch cycle

18 January, 2016 - 05:06

What happened in the reproducible builds effort between January 10th and January 16th:

Toolchain fixes

Benjamin Drung uploaded mozilla-devscripts/0.43 which sorts the file list in preferences files. Original patch by Reiner Herrmann.

Lunar submitted an updated patch series to make timestamps in packages created by dpkg deterministic. To ensure that the mtimes in data.tar are reproducible, with the patches, dpkg-deb uses the --clamp-mtime option added in tar/1.28-1 when available. An updated package has been uploaded to the experimental repository. This removed the need for a modified debhelper as all required changes for reproducibility have been merged or are now covered by dpkg.

Packages fixed

The following packages have become reproducible due to changes in their build dependencies: angband-doc, bible-kjv, cgoban, gnugo, pachi, wmpuzzle, wmweather, wmwork, xfaces, xnecview, xscavenger, xtrlock, virt-top.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues, but not all of them:

Untested changes:

reproducible.debian.net

Once again, Vagrant Cascadian is providing another armhf build system, allowing 6 more armhf builder jobs to run. (h01ger)

Stop requiring a modified debhelper and adapt to the latest dpkg experimental version by providing a predetermined identifier for the .buildinfo filename. (Mattia Rizzolo, h01ger)

New X.509 certificates were set up for jenkins.debian.net and reproducible.debian.net using Let's Encrypt!. Thanks to GlobalSign for providing certificates for the last year free of charge. (h01ger)

Package reviews

131 reviews have been removed, 85 added and 32 updated in the previous week.

FTBFS issues filed: 29. Thanks to Chris Lamb, Mattia Rizzolo, and Niko Tyni.

New issue identified: timestamps_in_manpages_added_by_golang_cobra.

Misc.

Most of the minutes from the meetings held in Athens in December 2015 are now available to the public.

Elena 'valhalla' Grandi: BDFSM

18 January, 2016 - 04:11
BDFSM

Enrico Zini http://www.enricozini.org/ coined the BDSM Free Software Manifesto http://www.enricozini.org/2013/debian/notes-from-that-other-lightning-talk-session2/ (formerly a Definition, which however isn't as precise as a description and, more importantly, doesn't fit in a cool geeky acronym):

I refuse to be bound by software I cannot negotiate with.

This begged to be turned into a cross-stitch wall hanging. I couldn't refuse.

http://social.gl-como.it/photos/valhalla/image/75f28c7ef6c17319f09b858b882bca49

More information and context for the phrase can be found in the notes for Enrico's talk http://www.enricozini.org/2015/debian/standup-comedy-notes/ at DebConf 2015: "Enrico's Semi Serious Stand-up Comedy". Note that while fully textual, the topics may be considered not really SFW, and some of the links definitely aren't. It also includes many insights into the nature of collaboration and Free Software communities, so I'd recommend reading it (and watching the video recording of the talk) anyway.

I've finally also published the pattern http://www.trueelena.org/computers/projects/bdfsm.html on my website:

* The image I've used while embroidering http://www.trueelena.org/computers/projects/bdfsm/bdfsm-pattern.png
* kxstitch project http://www.trueelena.org/computers/projects/bdfsm.kxs (converted now that kxstitch is back in Debian)
* kxstitch generated PDF http://www.trueelena.org/computers/projects/bdfsm-pattern.pdf

Lars Wirzenius: Obnam survey (January 2016)

17 January, 2016 - 18:06

Survey URL: http://goo.gl/forms/hdoQZKjs80

I am doing an Obnam survey. The goal of this survey is to collect feedback from those who use Obnam, or have tried it, to guide the project in the future.

The survey will run until February 29, 2016.

Goals:

  • Get a feel for the number of people using Obnam, and how they are using it.
  • Find out why those who've tried Obnam have chosen to not use it.
  • Get input on roadmap planning: what things are wanted most, or least. What is important for Obnam users?
  • Get feedback on what's good or bad about Obnam in general.
  • Get feedback about the project in addition to the software.
  • Get a feel for whether it's worth pursuing business opportunities around Obnam.

All questions in this survey are optional. I do not collect personal information at all. The survey is implemented using Google Forms, and so Google probably collects some information; sorry. You don't need to log in to Google to fill in the survey, though, and I encourage you to use all the privacy protection tools you have.

I hope as many Obnam users as possible fill in the survey.

Hideki Yamane: RE: How about "grooming" outdated packages?

17 January, 2016 - 18:01
I've investigated some packages to check a bit, patched one, and rebuilt the package. However, lintian prevented me from generating it, since it has a lintian error.

E: xxxx: maintainer-script-should-not-use-adduser-system-without-home postinst:23 'adduser --system --quiet --ingroup xxxx --no-create-home xxxx'
Well, it is a popular package and the maintainer is well experienced; however, it _still_ has such an error :-(
I wonder if some people could do grooming for such packages... like a security inspection.

Norbert Preining: Gaming: Portal Stories: Mel

17 January, 2016 - 07:02

The last few days, after about 20 hours of gameplay, I have finally finished Portal Stories: Mel, the single-player mod for Portal 2. After having played Portal and Portal 2, I have to say this mod took me by surprise. I remember very well that, after having played through Portal 2 and found it too easy, I was pointed to Portal Stories: Mel in several comments, and I thought “Well, it cannot be that difficult!”. I couldn’t have been more wrong!

Portal Stories: Mel is really difficult, at least for me. I remember game sessions where I just started the game, looked at the configuration of the current riddle for 20 minutes, and then quit the game without actually moving an inch. This kind of logical riddle, paired with a certain element of action and speed, is exactly what I enjoy.

It took me quite some time, and I have to admit I had to cheat at some points during the game, being stuck completely without an idea. Only the finish was a bit surprising – surprisingly simple. The big boss at the end was not really worth it. Still, leaving the testing facilities at the end gave me some kind of relief.

Considering that Portal Stories: Mel is something like fan art, developed by Prism Studios, a group made up of eight fans of the Portal games, this is a fantastic achievement. I really loved it, and I am deeply impressed by the ingenuity of the creators. Thanks!

Russ Allbery: podlators 4.05

17 January, 2016 - 05:12

Getting all the details right in a highly portable Perl core module that tries to support very old versions of Perl is tricky! And I clearly didn't do a good job of documenting previous decisions.

This release reintroduces pod2man.PL and pod2text.PL generator scripts to get the right Perl invocation. I thought ExtUtils::MakeMaker and Module::Build now took care of this, but apparently they only take care of this on UNIX platforms, not on the non-UNIX platforms that require special execution logic.

Thanks to a patch by Niko Tyni, this version of Pod::Man also falls back to non-utf8 behavior if the utf8 option is specified but the Encode module doesn't exist. This can help with some cross-build situations.

I also finally figured out the problem with occasional test failures on random platforms: I was trying to clean up the temporary directory used by tests after each test, but the CPAN test systems run all the tests in parallel, so the tests were racing with each other. This release just leaves the temporary directory around and deletes it in make clean.

You can get the latest version from the podlators distribution page.

Mike Hommey: Announcing git-cinnabar 0.3.1

16 January, 2016 - 18:26

This is a brown paper bag release. It turns out I managed to break the upgrade
path only 10 commits before the release.

What’s new since 0.3.0?
  • git cinnabar fsck doesn’t fail to upgrade metadata.
  • The remote.$remote.cinnabar-draft config works again.
  • Don’t fail to clone an empty repository.
  • Allow specifying Mercurial configuration items in a .git/hgrc file.


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.