Planet Debian

Planet Debian - http://planet.debian.org/

Junichi Uekawa: Playing with appengine python ndb.

1 August, 2017 - 07:24
Playing with appengine python ndb. Some interfaces have changed from the old interface and I don't think I have quite got the hang of it yet. The last time I was actively using it was for the tokyodebian reservation system, which I started 8 years ago and haven't touched for 3 years.

Jonathan McDowell: How to make a keyring

1 August, 2017 - 03:17

Every month or two keyring-maint gets a comment about how a key update we say we’ve performed hasn’t actually made it to the active keyring, or a query about why the keyring is so out of date, or is told that although a key has been sent to the HKP interface, and that interface shows the update as received, it isn’t working when trying to upload to the Debian archive. It’s frustrating to have to deal with these queries, but the confusion is understandable. There are multiple public interfaces to the Debian keyrings and they’re not all equal. This post attempts to explain the interactions between them, and how I go about working with them as part of the keyring-maint team.

First, a diagram to show the different interfaces to the keyring and how they connect to each other:

Public interfaces

rsync: keyring.debian.org::keyrings

This is the most important public interface; it’s the one that the Debian infrastructure uses. It’s the canonical location of the active set of Debian keyrings and is what you should be using if you want the most up to date copy. The validity of the keyrings can be checked using the included sha512sums.txt file, which will be signed by whoever in keyring-maint did the last keyring update.
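
As a rough sketch (the local directory name and exact layout are illustrative), fetching and verifying the keyrings looks something like:

rsync -a keyring.debian.org::keyrings/ debian-keyrings/
cd debian-keyrings
gpg --verify sha512sums.txt    # clearsigned by a keyring-maint member
sha512sum -c sha512sums.txt    # may warn about the signature armour lines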

HKP interface: hkp://keyring.debian.org/

What you talk to with gpg --keyserver keyring.debian.org. Serves out the current keyrings, and accepts updates to any key it already knows about (allowing, for example, expiry updates, new subkeys + uids or new signatures without the need to file a ticket in RT or otherwise explicitly request it). Updates sent to this interface will be available via it within a few hours, but must be manually folded into the active keyring. This in general happens about once a month when preparing for a general update of the keyring; for example b490c1d5f075951e80b22641b2a133c725adaab8.
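
For example, pushing an updated copy of a key already in the keyring, and fetching what the keyserver currently holds (the key ID is a placeholder):

gpg --keyserver hkp://keyring.debian.org --send-keys 0xDEADBEEF12345678
gpg --keyserver hkp://keyring.debian.org --recv-keys 0xDEADBEEF12345678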

Why not do this automatically? Even though the site uses GnuPG to verify incoming updates there are still occasions where we’ve seen bugs (such as #787046, where GnuPG would always import subkeys it didn’t understand, even when that subkey was already present). Also we don’t want to allow just any UID to be part of the keyring. It is thus useful to retain a final round of human sanity checking for any update before it becomes part of the keyring proper.

Alioth/anonscm: https://anonscm.debian.org/git/keyring/keyring/

A public mirror of the git repository the keyring-maint team use to maintain the keyring. Every action is recorded here, and in general each commit should be a single action (such as adding a new key, doing a key replacement or moving a key between keyrings). Note that pulling in the updates sent via HKP counts as a single action, rather than having a commit per key updated. This mirror is updated whenever a new keyring is made active (i.e. made available via the rsync interface). Until that point pending changes are kept private; we sometimes deal with information such as the fact someone has potentially had a key compromised that we don’t want to be public until we’ve actually disabled it. Every “keyring push” (as we refer to the process of making a new keyring active) is tagged with the date it was performed. Releases are also tagged with their codenames, to make it easy to do comparisons over time.
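
To poke at the history yourself, something like this works (the tag names below are placeholders; git tag lists the real ones):

git clone https://anonscm.debian.org/git/keyring/keyring.git
cd keyring
git tag                               # push dates and release codenames
git diff OLDER-TAG..NEWER-TAG --stat  # compare two keyring pushes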

Debian archive

This is actually the least important public interface to the keyring, at least from the perspective of the keyring-maint team. No infrastructure makes use of it and while it’s mostly updated when a new keyring is made active we only make a concerted effort to do so when it is coming up to release. It’s provided as a convenience package rather than something which should be utilised for active verification of which keys are and aren’t currently part of the keyring.

Team interface

Master repository: kaufmann.debian.org:/srv/keyring.debian.org/master-keyring.git

The master git repository for keyring maintenance is stored on kaufmann.debian.org AKA keyring.debian.org. This system is centrally managed by DSA, with only DSA and keyring-maint having login rights to it. None of the actual maintenance work takes place here; it is a bare repo providing a central point for the members of keyring-maint to collaborate around.

Private interface

Private working clone

This is where all of the actual keyring work happens. I have a local clone of the repository from kaufmann on a personal machine. The key additions / changes I perform all happen here, and are then pushed to the master repository so that they’re visible to the rest of the team. When preparing to make a new keyring active the changes that have been sent to the HKP interface are copied from kaufmann via scp and folded in using the pull-updates script. The tree is assembled into keyrings with a simple make and some sanity tests performed using make test. If these are successful the sha512sums.txt file is signed using gpg --clearsign and the output copied over to kaufmann. update-keyrings is then called to update the active keyrings (both rsync + HKP). A git push public pushes the changes to the public repository on anonscm. Finally gbp buildpackage --git-builder='sbuild -d sid' tells git-buildpackage to use sbuild to build a package ready to be uploaded to the archive.
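
Condensed into a rough sketch of the sequence just described (tool names as above; the exact file names and destination paths are approximations):

pull-updates                                  # fold in the key updates copied from kaufmann
make && make test                             # assemble the keyrings and run the sanity tests
gpg --clearsign sha512sums.txt                # produces sha512sums.txt.asc
scp sha512sums.txt.asc kaufmann.debian.org:   # then run update-keyrings there
git push public                               # update the public mirror on anonscm
gbp buildpackage --git-builder='sbuild -d sid'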

Hopefully that helps explain the different stages and outputs of keyring maintenance; I’m aware that it would be a good idea for this to exist somewhere on keyring.debian.org as well and will look at doing so.

Daniel Silverstone: F/LOSS activity, July 2017

1 August, 2017 - 02:10

Once again, my focus was on Gitano, which we're working toward a 1.1 for. We had another one of our Gitano developer days which was attended by Richard Maw and myself. You are invited to read the wiki page but a summary of what happened, which directly involved me, is:

  • Once again, we reviewed our current task state
  • We had a good discussion about our code of conduct including adopting a small change from upstream to improve matters
  • I worked on, and submitted a patch for, improving nested error message reports in Lace.
  • I reviewed and merged some work from Richard about pattern centralisation
  • I responded to comments on a number of in-flight series Richard had reviewed for me.
  • We discussed our plans for 1.1 and agreed that we'll be skipping a developer day in August because so much of it is consumed by DebConf and so on.

Other than that, related to Gitano during July I:

  • Submitted some code series before the developer day covering Gall cleanups and hook support in Gitano.
  • Reviewed and merged some more Makefile updates from Richard Ipsum
  • Reviewed and merged a Supple fix for environment cleardown from Richard Ipsum
  • Fixed an issue in one of the Makefiles which made it harder to build with dh-lua
  • I began work in earnest on Gitano CI, preparing a lot of scripts and support to sit around Jenkins (for now) for CIing packaging etc for Gitano and Debian
  • I began work on a system branch concept for Gitano CI which will let us handle the CI of branches in the repos, even if they cross repos.

I don't think I've done much non-Gitano F/LOSS work in July, but I am now in Montréal for debconf 2017 so hopefully more to say next month.

Chris Lamb: Free software activities in July 2017

1 August, 2017 - 00:35

Here is my monthly update covering what I have been doing in the free software world during July 2017 (previous month):

  • Updated travis.debian.net, my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds:
    • Moved the default mirror from ftp.de.debian.org to deb.debian.org. []
    • Create a sensible debian/changelog file if one does not exist. []
  • Updated django-slack, my library to easily post messages to the Slack group-messaging utility:
    • Merged a PR to clarify the error message when a channel could not be found. []
    • Reviewed and merged a suggestion to add a TestBackend. []
  • Added Pascal support to Louis Taylor's anyprint hack to add support for "print" statements from other languages into Python. []
  • Filed a PR against Julien Danjou's daiquiri Python logging helper to clarify an issue in the documentation. []
  • Merged a PR to Strava Enhancement Suite — my Chrome extension that improves and fixes annoyances in the web interface of the Strava cycling and running tracker — to remove Zwift activities with maps. []
  • Submitted a pull request for Redis key-value database store to fix a spelling mistake in a binary. []
  • Sent patches upstream to the authors of the OpenSVC cloud engine and the Argyll Color Management System to fix some "1204" typos.
  • Fixed a number of Python and deployment issues in my stravabot IRC bot. []
  • Correct a "1204" typo in Facebook's RocksDB key-value store. []
  • Corrected =+ typos in the Calibre e-book reader software. []
  • Filed a PR against the diaspy Python interface to the DIASPORA social network to correct the number of seconds in a day. []
  • Sent a pull request to remedy a =+ typo in sparqlwrapper, a SPARQL endpoint interface for Python. []
  • Filed a PR against Postfix Admin to fix some =+ typos. []
  • Fixed a "1042" typo in ImageJ, a Java image processing library. []
  • On a less-serious note, I filed an issue for Brad Abraham's bot for the Reddit sub-reddit to add some missing "hit the gym" advice. []

I also blogged about my recent lintian hacking and installation-birthday package.

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

(I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.)

This month I:

  • Assisted Mattia with a draft of an extensive status update to the debian-devel-announce mailing list. There were interesting follow-up discussions on Hacker News and Reddit.
  • Submitted patches to fix reproducibility-related toolchain issues within Debian.
  • I also submitted 5 patches to fix specific reproducibility issues in autopep8, castle-game-engine, grep, libcdio & tinymux.
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Worked on publishing our weekly reports. (#114, #115, #116 & #117)

I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.
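
A typical invocation just takes the two artifacts to compare, optionally writing an HTML report (the file names here are only an example):

diffoscope foo_1.0-1_amd64.deb foo_1.0-1_amd64.rebuild.deb
diffoscope --html report.html foo_1.0-1_amd64.deb foo_1.0-1_amd64.rebuild.deb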

  • comparators.xml:
    • Fix EPUB "missing file" tests; they ship a META-INF/container.xml file. []
    • Misc style fixups. []
  • APK files can also be identified as "DOS/MBR boot sector". (#868486)
  • comparators.sqlite: Simplify file detection by rewriting manual recognizes call with a Sqlite3Database.RE_FILE_TYPE definition. []
  • comparators.directory:
    • Revert the removal of a try-except. (#868534)
    • Tidy module. []

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.
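
Within a Debian package build it is normally run automatically via dh_strip_nondeterminism, but it can also be pointed at a file directly (the file name is only an example):

strip-nondeterminism build/libfoo.jar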

  • Add missing File::Temp imports in the JAR and PNG handlers. This appears to have been exposed by lazily-loading handlers in #867982. (#868077)

buildinfo.debian.net

buildinfo.debian.net is my experiment in how to process, store and distribute .buildinfo files after the Debian archive software has processed them.

  • Avoid a race condition between check-and-creation of Buildinfo instances. []


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" emails to the debian-devel-announce mailing list.

Patches contributed
  • obs-studio: Remove annoying "click wrapper" on first startup. (#867756)
  • vim: Syntax highlighting for debian/copyright files. (#869965)
  • moin: Incorrect timezone offset applied due to "84600" typo. (#868463)
  • ssss: Add a simple autopkgtest. (#869645)
  • dch: Please bump $latest_bpo_dist to current stable release. (#867662)
  • python-kaitaistruct: Remove Markdown and homepage references from package long descriptions. (#869265)
  • album-data: Correct invalid Vcs-Git URI. (#869822)
  • pytest-sourceorder: Update Homepage field. (#869125)

I also made a very large number of contributions to the Lintian static analysis tool. To avoid duplication here, I have outlined them in a separate post.

Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 1014-1 for libclamunrar, a library to add unrar support to the Clam anti-virus software to fix an arbitrary code execution vulnerability.
  • Issued DLA 1015-1 for the libgcrypt11 crypto library to fix a "sliding windows" information leak.
  • Issued DLA 1016-1 for radare2 (a reverse-engineering framework) to prevent a remote denial-of-service attack.
  • Issued DLA 1017-1 to fix a heap-based buffer over-read in the mpg123 audio library.
  • Issued DLA 1018-1 for the sqlite3 database engine to prevent a vulnerability that could be exploited via a specially-crafted database file.
  • Issued DLA 1019-1 to patch a cross-site scripting (XSS) exploit in phpldapadmin, a web-based interface for administering LDAP servers.
  • Issued DLA 1024-1 to prevent an information leak in nginx via a specially-crafted HTTP range.
  • Issued DLA 1028-1 for apache2 to prevent the leakage of potentially confidential information via providing Authorization Digest headers.
  • Issued DLA 1033-1 for the memcached in-memory object caching server to prevent a remote denial-of-service attack.
Uploads
  • redis:
    • 4:4.0.0-1 — Upload new major upstream release to unstable.
    • 4:4.0.0-2 — Make /usr/bin/redis-server in the primary package a symlink to /usr/bin/redis-check-rdb in the redis-tools package to prevent duplicate debug symbols that result in a package file collision. (#868551)
    • 4:4.0.0-3 — Add -latomic to LDFLAGS to avoid a FTBFS on the mips & mipsel architectures.
    • 4:4.0.1-1 — New upstream version. Install 00-RELEASENOTES as the upstream changelog.
    • 4:4.0.1-2 — Skip non-deterministic tests that rely on timing. (#857855)
  • python-django:
    • 1:1.11.3-1 — New upstream bugfix release. Check DEB_BUILD_PROFILES consistently, not DEB_BUILD_OPTIONS.
  • bfs:
    • 1.0.2-2 & 1.0.2-3 — Use help2man to generate a manpage.
    • 1.0.2-4 — Set hardening=+all for bindnow, etc.
    • 1.0.2-5 & 1.0.2-6 — Don't use upstream's release target as it overrides our CFLAGS & install RELEASES.md as the upstream changelog.
    • 1.1-1 — New upstream release.
  • libfiu:
    • 0.95-4 — Apply patch from Steve Langasek to fix autopkgtests. (#869709)
  • python-daiquiri:
    • 1.0.1-1 — Initial upload. (ITP)
    • 1.1.0-1 — New upstream release.
    • 1.1.0-2 — Tidy package long description.
    • 1.2.1-1 — New upstream release.

I also reviewed and sponsored the uploads of gtts-token 1.1.1-1 and nlopt 2.4.2+dfsg-3.

Debian bugs filed
  • ITP: python-daiquiri — Python library to easily setup basic logging functionality. (#867322)
  • twittering-mode: Correct incorrect time formatting due to "84600" typo. (#868479)
FTP Team

As a Debian FTP assistant I ACCEPTed 45 packages: 2ping, behave, cmake-extras, cockpit, cppunit1.13, curvedns, flask-mongoengine, fparser, gnome-shell-extension-dash-to-panel, graphene, gtts-token, hamlib, hashcat-meta, haskell-alsa-mixer, haskell-floatinghex, haskell-hashable-time, haskell-integer-logarithms, haskell-murmur-hash, haskell-quickcheck-text, haskell-th-abstraction, haskell-uri-bytestring, highlight.js, hoel, libdrm, libhtp, libpgplot-perl, linux, magithub, meson-mode, orcania, pg-dirtyread, prometheus-apache-exporter, pyee, pytest-pep8, python-coverage-test-runner, python-digitalocean, python-django-imagekit, python-rtmidi, python-transitions, qdirstat, redtick, ulfius, weresync, yder & zktop.

I additionally filed 5 RC bugs for incomplete debian/copyright files against: cockpit, cppunit, cppunit1.13, curvedns & highlight.js.

Russell Coker: Running a Tor Relay

31 July, 2017 - 19:48

I previously wrote about running my SE Linux Play Machine over Tor [1] which involved configuring ssh to use Tor.

Since then I have installed a Tor hidden service for ssh on many systems I run for clients. The reason is that it is fairly common for a client’s server to get a new IP address by DHCP, or for them to accidentally set their firewall to deny inbound connections. Without some sort of VPN this results in difficult phone calls talking non-technical people through the process of setting up a tunnel or discovering an IP address. While I can run my own VPN for them I don’t want their infrastructure tied to mine and they don’t want to pay for a 3rd party VPN service. Tor provides a free VPN service and works really well for this purpose.
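
For reference, the ssh hidden service side of that setup is just a couple of torrc lines (the directory path below is illustrative):

HiddenServiceDir /var/lib/tor/ssh/
HiddenServicePort 22 127.0.0.1:22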

As I believe in giving back to the community I decided to run my own Tor relay. I have no plans to ever run a Tor Exit Node because that involves more legal problems than I am willing or able to deal with. A good overview of how Tor works is the EFF page about it [2]. The main point of a “Middle Relay” (or just “Relay”) is that it only sends and receives encrypted data from other systems. As the Relay software (and the sysadmin if they choose to examine traffic) only sees encrypted data without any knowledge of the source or final destination the legal risk is negligible.

Running a Tor relay is quite easy to do. The Tor project has a document on running relays [3], which basically involves changing 4 lines in the torrc file and restarting Tor.
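
Those lines typically look something like the following (the nickname and contact address are placeholders):

ORPort 9001
Nickname myrelaynickname
ContactInfo tor-admin@example.com
ExitPolicy reject *:*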

If you are running on Debian you should install the package tor-geoipdb to allow Tor to determine where connections come from (and to not whinge in the log files).

ORPort [IPV6ADDR]:9001

If you want to use IPv6 then you need a line like the above with IPV6ADDR replaced by the address you want to use. Currently Tor only supports IPv6 for connections between Tor servers, and only for the data transfer, not the directory services.

Data Transfer

I currently have 2 systems running as Tor relays; both of them are well connected in a European DC and they are each transferring about 10GB of data per day, which isn’t a lot by server standards. I don’t know whether there are enough relays around the world that each one’s share of the load is small, or whether there is some geographic dispersion algorithm which determined that there are too many relays in operation in that region.

Related posts:

  1. RPC and SE Linux One ongoing problem with TCP networking is the combination of...
  2. The Most Important things for running a Reliable Internet Service One of my clients is currently investigating new hosting arrangements....
  3. how to run dynamic ssh tunnels service smtps { disable = no socket_type = stream wait...

Norbert Preining: Calibre in Debian

31 July, 2017 - 09:44

Some news about Calibre in Debian: I have been added to the list of maintainers, thanks Martin, and the recent release of Calibre 3.4 into Debian/unstable brought some fixes concerning the desktop integration. Now I am working on Calibre 3.5.

Calibre 3.5 separates out one module, html5-parser, into a separate package which needs to be included in Debian first. I have prepared and uploaded a version, but NEW processing will keep this package from entering Debian for a while. Another thing I am currently doing is going over the list of bugs and trying to fix or close as many as possible.

Finally, the endless Rar support story still continues. I still don’t have any response from the maintainer of unrar-nonfree in Debian, so I am contemplating packaging my own version of libunrar. As I wrote in the previous post, Calibre now checks whether the Python module unrardll is available, and if so, uses it to decode rar-packed ebooks. I have a package for unrardll ready to be uploaded, but it needs the shared library version, and thus I am stuck waiting for unrar-nonfree.

Anyway, to help all those wanting to play with the latest Calibre, my archive nowadays contains:

  • preliminary packages for Calibre 3.5
  • updated package for unrar-nonfree that ships the shared library
  • the new package unrardll, to be uploaded after unrar-nonfree is updated
  • the new package html5-parser which is in NEW

Together these packages provide the newest Calibre with Rar support.

deb http://www.preining.info/debian/ calibre main
deb-src http://www.preining.info/debian/ calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13.

Enjoy

Jose M. Calhariz: Crossgrading a complex Desktop and Debian Developer machine running Debian 9, for real.

30 July, 2017 - 14:40

After some time without looking into this problem, I decided to make another attempt. I have not found a way to do a complex crossgrade of my desktop without massively removing packages, and there is bug after bug that requires editing the config scripts of the packages.

So here is another try at crossgrading my desktop, this time for real.

apt-get update
apt-get upgrade
apt-get autoremove
apt-get clean
dpkg --list > original.dpkg
apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
for pack32 in $(grep i386 original.dpkg | egrep "^ii " | awk '{print $2}' ) ; do 
  echo $pack32 ; 
  apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64 ; 
done
cd /var/cache/apt/archives/
dpkg --install libacl1_*amd64.deb libattr1_*_amd64.deb libapt-pkg5.0_*amd64.deb libbz2-1.0_*amd64.deb dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb 
dpkg --install --skip-same-version *.deb
dpkg --configure --pending
dpkg --install --skip-same-version *.deb
dpkg --remove libcurl4-openssl-dev:i386
dpkg --configure --pending
dpkg --remove libkdesu5 kde-runtime
apt-get --fix-broken install
apt-get install  $(egrep "^ii"  ~/original.dpkg | grep -v ":i386" | grep -v "all" | grep -v "aiccu" | grep -v "acroread" | grep -v "flashplayer-mozilla" | grep -v "flash-player-properties" | awk '{print $2}')

Reboot.

Then the system failed to boot because the lvm2 package was missing.

Boot with a live CD.

sudo -i
mount /dev/sdc2         /mnt
mount /dev/vg100/usr /mnt/usr
mount /dev/vg100/var /mnt/var
mount -o bind /proc    /mnt/proc
mount -o bind /sys     /mnt/sys
mount -o bind /dev/    /mnt/dev
mount -o bind /dev/pts  /mnt/dev/pts
chroot /mnt /bin/su -
apt-get install lvm2
exit
reboot

Still, some things do not work, like the fakeroot command.

for pack32 in $(grep i386 original.dpkg  | egrep "^ii " | awk '{print $2}' ) ; do     
  echo $pack32 ;     
  if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then       
    apt-get -y install ${pack32%:i386}:amd64 ;     
  fi ;   
done

for pack32 in $(grep i386 original.dpkg  | egrep "^ii " | awk '{print $2}' ) ; do     
  echo $pack32 ;     
  apt-get -y install ${pack32%:i386}:amd64 ;     
done

Now it is time to find what still does not work and how to solve it.

Niels Thykier: Introducing the debhelper buildlabel prototype for multi-building packages

30 July, 2017 - 13:50

For most packages, the “dh” short-hand rules (possibly with a few overrides) work great.  It can often auto-detect the buildsystem and handle all the trivial parts.

With one notable exception: what if you need to compile the upstream code twice (or more) with different flags?  This is the case for all source packages building both regular debs and udebs.

In that case, you would previously need to override about 5-6 helpers for this to work at all.  The five dh_auto_* helpers and usually also dh_install (to call it with a different --sourcedir for different packages).  This gets even more complex if you want to support Build-Profiles such as “noudeb” and “nodoc”.

The best way to support “nodoc” in debhelper is to move documentation out of dh_install’s config files and use dh_installman, dh_installdocs, and dh_installexamples instead (NB: wait for compat 11 before doing this).  This in turn will mean more overrides with --sourcedir and -p/-N.

And then there is “noudeb”, which currently requires manual handling in debian/rules.  Basically, you need to use make or shell if-statements to conditionally skip the udeb part of the builds.
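
In practice that manual handling tends to end up looking something like this in debian/rules (a sketch; the configure flags match the example used later in this post):

override_dh_auto_configure:
    dh_auto_configure -B build-deb -- --with-feature1 --with-feature2
ifeq ($(filter noudeb,$(DEB_BUILD_PROFILES)),)
    dh_auto_configure -B build-udeb -- --without-feature1 --without-feature2
endif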

All of this is needlessly complex.

Improving the situation

In an attempt to make things better, I have made a new prototype feature in debhelper called “buildlabels” in experimental.  The current prototype is designed to deal with part (but not all) of the above problems:

  • It will remove the need for littering your rules file for supporting “noudeb” (and in some cases also other “noX” profiles).
  • It will remove the need for overriding the dh_install* tools just to juggle with --sourcedir and -p/-N.

However, it currently does not solve the need for overriding the dh_auto_* tools and I am not sure when/if it will.

The feature relies on being able to relate packages to a given series of calls to dh_auto_*.  In the following example, I will use udebs for the secondary build.  However, this feature is not tied to udebs in any way and can be used by any source package that needs to do two or more upstream builds for different packages.

Assume our example source builds the following binary packages:

  • foo
  • libfoo1
  • libfoo-dev
  • foo-udeb
  • libfoo1-udeb

And in the rules file, we would have something like:

[...]

override_dh_auto_configure:
    dh_auto_configure -B build-deb -- --with-feature1 --with-feature2
    dh_auto_configure -B build-udeb -- --without-feature1 --without-feature2

[...]

What is somewhat obvious to a human is that the first configure line is related to the regular debs and the second configure line is for the udebs.  However, debhelper does not know how to infer this, and this is where buildlabels come in.  With buildlabels, you can let debhelper know which packages and builds belong together.

How to use buildlabels

To use buildlabels, you have to do three things:

  1. Pick a reasonable label name for the secondary build.  In the example, I will use “udeb”.
  2. Add “--buildlabel=$LABEL” to all dh_auto_* calls related to your secondary build.
  3. Tag all packages related to that label with “X-DH-Buildlabel: $LABEL” in debian/control.  (For udeb packages, you may want to add “Build-Profiles: <!noudeb>” while you are at it).

For the example package, we would change the debian/rules snippet to:

[...]

override_dh_auto_configure:
    dh_auto_configure -B build-deb -- --with-feature1 --with-feature2
    dh_auto_configure --buildlabel=udeb -B build-udeb -- --without-feature1 --without-feature2

[...]

(Remember to update *all* calls to dh_auto_* helpers; the above only lists dh_auto_configure to keep the example short.)  And then add “X-DH-Buildlabel: udeb” in the stanzas for foo-udeb + libfoo1-udeb.
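
As a concrete illustration, the foo-udeb stanza in debian/control would then contain something like this (other fields, such as Architecture and Description, are omitted here):

Package: foo-udeb
Package-Type: udeb
Build-Profiles: <!noudeb>
X-DH-Buildlabel: udeb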

With those two minor changes:

  • debhelper will skip the calls to dh_auto_* with --buildlabel=udeb if the udeb packages are skipped.
  • dh_auto_install will automatically pick a separate destination directory by default for the udeb build (assuming you do not explicitly override it with --destdir).
  • dh_install will now automatically pick up files from the destination directory that dh_auto_install used for the given package (even if you overwrote it with --destdir).  Note that you have to remove any use of “--sourcedir” first as this disables the auto-detection.  This also works for other dh_install* tools supporting --sourcedir in compat 11 or later.
Real example

Thanks to Michael Biebl, I was able to make a branch in the systemd git repository to play with this feature, so I have a real example to use as a show case.  The gist of it is in three commits on that branch.

Full branch can be seen at: https://anonscm.debian.org/git/pkg-systemd/systemd.git/log/?h=wip-dh-prototype-smarter-multi-builds

Request for comments / call for testing

This prototype is now in experimental (debhelper/10.7+exp.buildlabels) and you are very welcome to take it for a spin.  Please let me know if you find the idea useful and feel free to file bugs or feature requests.  If deemed useful, I will merge into master and include in a future release.

If you have any questions or comments about the feature or need help with trying it out, you are also very welcome to mail the debhelper-devel mailing list.

Known issues / the fine print:
  • It is an experimental feature and may change without notice.
  • The current prototype may break existing packages as it is not guarded by a compat bump to ease your testing.  I am still very curious to hear about any issues you may experience.
  • The default build directory is re-used even with different buildlabels, so you still need to use explicit build dirs for buildsystems that prefer building in a separate directory (e.g. meson).
  • udebs are not automatically tagged with an “udeb” buildlabel.  This is partly by design as some source packages only build udebs (and no regular debs).  If they were automatically tagged, the existing packages would break.
  • Label names are used in path names, so you may want to refrain from using “too exciting” label names.
  • It is an experimental feature and may change without notice. (Yes, I thought it deserved repeating)

Filed under: Debhelper, Debian

Antoine Beaupré: My free software activities, July 2017

30 July, 2017 - 04:16
Debian Long Term Support (LTS)

This is my monthly report of working on Debian LTS. This time I worked on various hairy issues surrounding ca-certificates, unattended-upgrades, apache2 regressions, libmtp, tcpdump and ipsec-tools.

ca-certificates updates

I've been working on the removal of the Wosign and StartCom certificates (Debian bug #858539) and, in general, the synchronisation of ca-certificates across suites (Debian bug #867461) since at least last March. I have made an attempt at summarizing the issue which led to a productive discussion and it seems that, in the end, the maintainer will take care of synchronizing information across suites.

Guido was right in again raising the question of synchronizing NSS across all suites (Debian bug #824872), which itself raised the other question of how to test reverse dependencies. This brings me back to Debian bug #817286, which basically proposed the idea of having "proposed updates" for security issues. The problem is that while we can upload test packages to stable proposed-updates, we can't do the same in LTS because the suite is closed and we operate only on security packages. This issue came up before in other security uploads and we need to think better about how to solve this.

unattended-upgrades

Speaking of security upgrades brings me to the question of a bug (Debian bug #867169) that was filed against the wheezy version of unattended-upgrades, which showed that the package simply stopped working after the latest stable release, because wheezy became "oldoldstable". I first suggested using the "codename" but that appears to have been introduced only after wheezy.

In the end, I proposed a simple update that would fix the configuration files and uploaded this as DLA-1032-1. This is thankfully fixed in later releases and will not require such hackery when jessie becomes LTS as well.

libmtp

Next up is the work on the libmtp vulnerabilities (CVE-2017-9831 and CVE-2017-9832). As I described in my announcement, the work to backport the patch was huge, as upstream basically backported a whole library from the gphoto2 package to fix those issues (and probably many more). The lack of a test suite made it difficult to trust my own work, but given that I had no (negative) feedback, I figured it was okay to simply upload the result and that became DLA-1029-1.

tcpdump

I then looked at reproducing CVE-2017-11108, a heap overflow triggered when tcpdump parses certain STP packets. In Debian bug #867718, I described how to reproduce the issue across all suites and opened an issue upstream, given that the upstream maintainers hadn't responded in weeks according to notes in the RedHat Bugzilla issue. I eventually worked on a patch which I shared upstream, but that was rejected as they were already working on it in their embargoed repository.

I can explain this confusion and duplication of work with:

  1. the original submitter didn't really contact security@tcpdump.org
  2. he did and they didn't reply, being just too busy
  3. they replied and he didn't relay that information back

I think #2 is most likely: the tcpdump.org folks are probably very busy with tons of reports like this. Still, I should probably have contacted security@tcpdump.org directly before starting my work, even though no harm was done because the issues I worked on were already public.

Since then, tcpdump has released 4.9.1 which fixes the issue, but then new CVEs came out that will require more work and probably another release. People looking into this issue must be certain to coordinate with the tcpdump security team before fixing the actual issues.

ipsec-tools

Another package that didn't quite have a working solution is the ipsec-tools suite, in which the racoon daemon was vulnerable to a remotely-triggered DOS attack (CVE-2016-10396). I reviewed and fixed the upstream patch which introduced a regression. Unfortunately, there is no test suite or proof of concept to control the results.

The reality is that ipsec-tools is really old, and should maybe simply be removed from Debian, in favor of Strongswan. Upstream hasn't done a release in years and various distributions have patched up forks of those to keep it alive... I was happy, however, to know that the maintainer (noahm) will take care of managing the resulting upload with my patch in LTS and other suites, fixing that issue for now.

apache2

Finally, I was bitten back by my old DLA-841-1 upload I did all the way back in February, as it introduced a regression (Debian bug #858373) in which it was possible to segfault Apache workers with a trivial query, in certain (rather exotic, I might add) configurations (ErrorDocument 400 directive pointing to a cgid script in worker mode).

Still, it was a serious regression and I found a part of the nasty long patch we worked on back then that was faulty, and introduced a small fix to correct that. The proposed package unfortunately didn't yield any feedback, and I can only assume it will work okay for people. The result is the DLA-841-2 upload which fixes the regression. I unfortunately didn't have time to work on the other CVEs affecting apache2 in LTS at the time of writing.

Triage

I also did some miscellaneous triage by filing Debian bug #867477 for poppler in an effort to document better the pending issue.

Next up was some minor work on eglibc issues. CVE-2017-8804 has a patch, but it's been disputed. Since the main victim of this and the core of the vulnerability (rpcbind) has already been fixed, I am not sure this vulnerability is still a thing in LTS at all.

I also looked at CVE-2014-9984, but the code is so different in wheezy that I wonder if LTS is affected at all. Unfortunately, the eglibc gymnastics are a little beyond me and I do not feel confident enough to deal with those issues myself, so I will just push them aside for now and leave them open for others to look at.

Other free software work

And of course, there's my usual monthly volunteer work. My ratio is a little better this time, having reached a roughly even split between paid and volunteer work, whereas it was 60% volunteer work in March.

Announcing ecdysis

I recently published ecdysis, a set of templates and code samples that I frequently reuse across projects. This is probably the least pronounceable project name I have ever chosen, but this is somewhat on purpose. The purpose of this project is not collaboration or to become a library: it's just a personal project which I share with the world as a curiosity.

To quote the README file:

The name comes from what snakes and other animals do to "create a new snake": they shed their skin. This is not so appropriate for snakes, as it's just a way to rejuvenate their skin, but is especially relevant for arthropods since the ecdysis may be associated with a metamorphosis:

Ecdysis is the moulting of the cuticle in many invertebrates of the clade Ecdysozoa. Since the cuticle of these animals typically forms a largely inelastic exoskeleton, it is shed during growth and a new, larger covering is formed. The remnants of the old, empty exoskeleton are called exuviae. — Wikipedia

So this project is metamorphosed into others when the documentation templates, code examples and so on are reused elsewhere. For that reason, the license is an unusually liberal (for me) MIT/Expat license.

The name also has the nice property of being absolutely unpronounceable, which makes it unlikely to be copied but easy to search online.

It was an interesting exercise to go back into older projects and factor out interesting code. The process is not complete yet, as there are older projects I'm still curious about reviewing. A bunch of that code could also be factored into upstream projects and maybe even the Python standard library.

In short, this is stuff I keep on forgetting how to do: a proper setup.py config, some fancy argparse extensions and so on.

Beets experiments

Since I started using Subsonic (or Libresonic) to manage the music on my phone, album covers are suddenly way more interesting. But my collection so far has had limited album covers: my other media player (gmpc) would download those on the fly on its own and store them in its own database - not on the filesystem. I guess this could be considered to be a limitation of Subsonic, but I actually appreciate the separation of duty here: garbage in, garbage out. The quality of Subsonic's rendering depends largely on how well set up your library and tags are.

It turns out there is an amazing tool called beets to do exactly that kind of stuff. I originally discarded that "media library management system for obsessive-compulsive [OC] music geeks", trying to convince myself I was not an "OC music geek". Turns out I am. Oh well.

Thanks to beets, I was able to download album covers for a lot of the albums in my collection. The only covers that are missing now are albums that are not correctly tagged and that beets couldn't automatically fix up. I still need to go through those and fix all those tags, but the first run did an impressive job at getting album covers.

Then I got the next crazy idea: after a camping trip where we forgot (again) the lyrics to Georges Brassens, I figured I could start putting some lyrics on my ebook reader. "How hard can that be?" of course, being the start of another crazy project. A pull request and 3 days later, I had something that could turn a beets lyrics database into a Sphinx document which, in turn, can be turned into an ePUB. In the process, I probably got blocked from MusixMatch a hundred times, but it's done. Phew!

The resulting e-book is about 8000 pages long, but is still surprisingly responsive. In the process, I also happened to do a partial benchmark of Python's bloom filter libraries. The biggest surprise there was the performance of the set builtin: for small items, it is basically as fast as a bloom filter. Of course, when the item size grows larger, its memory usage explodes, but in this case it turned out to be sufficient and a bloom filter completely overkill and confusing.

Oh, and thanks to those efforts, I got admitted to the beetbox organization on GitHub! I am not sure what I will do with that newfound power: I was scratching an itch, really. But hopefully I'll be able to help here and there in the future as well.

Debian package maintenance

I did some normal upkeep on a bunch of my packages this month, which was long overdue:

  • uploaded slop 6.3.47-1: major new upstream release
  • uploaded an NMU for maim 5.4.64-1.1: maim was broken by the slop release
  • uploaded pv 1.6.6-1: new upstream release
  • uploaded kedpm 1.0+deb8u1 to jessie (oldstable): one last security fix (Debian bug #860817, CVE-2017-8296) for that derelict password manager
  • uploaded charybdis 3.5.5-1: new minor upstream release, with optional support for mbedtls
  • filed Debian bug #866786 against cryptsetup to make the remote initramfs SSH-based unlocking support multiple devices: thanks to the maintainer, this now works flawlessly in buster and may be backported to stretch
  • expanded on Debian bug #805414 against gdm3 and Debian bug #845938 against pulseaudio, because I had trouble connecting my computer to this new Bluetooth speaker. turns out this is a known issue in Pulseaudio: whereas it releases ALSA devices, it doesn't release Bluetooth devices properly. Documented this more clearly in the wiki page
  • filed Debian bug #866790 regarding old stray Apparmor profiles that were lying around my system after an upgrade, which got me interested in Debian bug #830502 in turn
  • filed Debian bug #868728 against cups regarding a weird behavior I had interacting with a network printer. turns out the other workstation was misconfigured... why are printers still so hard?
  • filed Debian bug #870102 to automate sbuild schroots upgrades
  • after playing around with rash, I tried to complete the packaging (Debian bug #754972) of percol with this pull request upstream. this ended up being way too much overhead and I reverted to my old normal history habits.

Dirk Eddelbuettel: Updated overbought/oversold plot function

30 July, 2017 - 00:34

A good six years ago I blogged about plotOBOS() which charts a moving average (from one of several available variants) along with shaded standard deviation bands. That post has a bit more background on the why/how and motivation, but as a teaser here is the resulting chart of the SP500 index (with ticker ^GSPC):

 

The code uses a few standard finance packages for R (with most of them maintained by Joshua Ulrich given that Jeff Ryan, who co-wrote chunks of these, is effectively retired from public life). Among these, xts had a recent release reflecting changes which occurred during the four (!!) years since the previous release, and covering at least two GSoC projects. With that came subtle API changes: something we all generally try to avoid but which is at times the only way forward. In this case, the shading code I used (via polygon() from base R) no longer cooperated with the beefed-up functionality of plot.xts(). Luckily, Ross Bennett incorporated that same functionality into a new function addPolygon --- which even credits this same post of mine.

With that, the updated code becomes

## plotOBOS -- displaying overbough/oversold as eg in Bespoke's plots
##
## Copyright (C) 2010 - 2017  Dirk Eddelbuettel
##
## This is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 2 of the License, or
## (at your option) any later version.

suppressMessages(library(quantmod))     # for getSymbols(), brings in xts too
suppressMessages(library(TTR))          # for various moving averages

plotOBOS <- function(symbol, n=50, type=c("sma", "ema", "zlema"),
                     years=1, blue=TRUE, current=TRUE, title=symbol,
                     ticks=TRUE, axes=TRUE) {

    today <- Sys.Date()
    if (class(symbol) == "character") {
        X <- getSymbols(symbol, from=format(today-365*years-2*n), auto.assign=FALSE)
        x <- X[,6]                          # use Adjusted
    } else if (inherits(symbol, "zoo")) {
        x <- X <- as.xts(symbol)
        current <- FALSE                # don't expand the supplied data
    }

    n <- min(nrow(x)/3, 50)             # as we may not have 50 days

    sub <- ""
    if (current) {
        xx <- getQuote(symbol)
        xt <- xts(xx$Last, order.by=as.Date(xx$`Trade Time`))
        colnames(xt) <- paste(symbol, "Adjusted", sep=".")
        x <- rbind(x, xt)
        sub <- paste("Last price: ", xx$Last, " at ",
                     format(as.POSIXct(xx$`Trade Time`), "%H:%M"), sep="")
    }

    type <- match.arg(type)
    xd <- switch(type,                  # compute xd as the central location via selected MA smoother
                 sma = SMA(x,n),
                 ema = EMA(x,n),
                 zlema = ZLEMA(x,n))
    xv <- runSD(x, n)                   # compute xv as the rolling volatility

    strt <- paste(format(today-365*years), "::", sep="")
    x  <- x[strt]                       # subset plotting range using xts' nice functionality
    xd <- xd[strt]
    xv <- xv[strt]

    xyd <- xy.coords(.index(xd),xd[,1]) # xy coordinates for direct plot commands
    xyv <- xy.coords(.index(xv),xv[,1])

    n <- length(xyd$x)
    xx <- xyd$x[c(1,1:n,n:1)]           # for polygon(): from first point to last and back

    if (blue) {
        blues5 <- c("#EFF3FF", "#BDD7E7", "#6BAED6", "#3182BD", "#08519C") # cf brewer.pal(5, "Blues")
        fairlylight <<- rgb(189/255, 215/255, 231/255, alpha=0.625) # aka blues5[2]
        verylight <<- rgb(239/255, 243/255, 255/255, alpha=0.625) # aka blues5[1]
        dark <<- rgb(8/255, 81/255, 156/255, alpha=0.625) # aka blues5[5]
        ## buglet in xts 0.10-0 requires the <<- here
    } else {
        fairlylight <<- rgb(204/255, 204/255, 204/255, alpha=0.5)  # two suitable grays, alpha-blending at 50%
        verylight <<- rgb(242/255, 242/255, 242/255, alpha=0.5)
        dark <<- 'black'
    }

    plot(x, ylim=range(range(x, xd+2*xv, xd-2*xv, na.rm=TRUE)), main=title, sub=sub, 
         major.ticks=ticks, minor.ticks=ticks, axes=axes) # basic xts plot setup
    addPolygon(xts(cbind(xyd$y+xyv$y, xyd$y+2*xyv$y), order.by=index(x)), on=1, col=fairlylight)  # upper
    addPolygon(xts(cbind(xyd$y-xyv$y, xyd$y+1*xyv$y), order.by=index(x)), on=1, col=verylight)    # center
    addPolygon(xts(cbind(xyd$y-xyv$y, xyd$y-2*xyv$y), order.by=index(x)), on=1, col=fairlylight)  # lower
    lines(xd, lwd=2, col=fairlylight)   # central smoothed location
    lines(x, lwd=3, col=dark)           # actual price, thicker
}

and the main change is the three calls to addPolygon. To illustrate, we call plotOBOS("SPY", years=2) with an updated plot of the ETF representing the SP500 over the last two years:

 

Comments and further enhancements welcome!

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Robert McQueen: Welcome, Flathub!

29 July, 2017 - 23:24

At the Gtk+ hackfest in London earlier this year, we stole an afternoon from the toolkit folks (sorry!) to talk about Flatpak, and how we could establish a “critical mass” behind the Flatpak format. Bringing Linux container and sandboxing technology together with ostree, we’ve got a technology which solves real world distribution, technical and security problems which have arguably held back the Linux desktop space and frustrated ISVs and app developers for nearly 20 years. The problem we need to solve, like any ecosystem, is one of users and developers – without stuff you can easily get in Flatpak format, there won’t be many users, and without many users, we won’t have a strong or compelling incentive for developers to take their precious time to understand a new format and a new technology.

As Alex Larsson said in his GUADEC talk yesterday: Decentralisation is good. Flatpak is a tool that is totally agnostic of who is publishing the software and where it comes from. For software freedom, that’s an important thing because we want technology to empower users, rather than tell them what they can or can’t do. Unfortunately, decentralisation makes for a terrible user experience. At present, the Flatpak webpage has a manually curated list of links to 10s of places where you can find different Flatpaks and add them to your system. You can’t easily search and browse to find apps to try out – so it’s clear that if the current situation remains we’re not going to be able to get a critical mass of users and developers around Flatpak.

Enter Flathub. The idea is that by creating an obvious “center of gravity” for the Flatpak community to contribute and build their apps, users will have one place to go and find the best that the Linux app ecosystem has to offer. We can take care of the boring stuff like running a build service and empower Linux application developers to choose how and when their app gets out to their users. After the London hackfest we sketched out a minimum viable system – Github, Buildbot and a few workers – and got it going over the past few months, culminating in a mini-fundraiser to pay for the hosting of a production-ready setup. Thanks to the 20 individuals who supported our fundraiser, to Mythic Beasts who provided a server along with management, monitoring and heaps of bandwidth, and to Codethink and Scaleway who provide our ARM and Intel workers respectively.

We inherit our core principles from the Flatpak project – we want the Flatpak technology to succeed at alleviating the issues faced by app developers in targeting a diverse set of Linux platforms. None of this stops you from building and hosting your own Flatpak repos and we look forward to this being a wide and open playing field. We care about the success of the Linux desktop as a platform, so we are open to proprietary applications through Flatpak’s “extra data” feature where the client machine downloads 3rd party binaries. They are correctly labeled as such in the AppStream, so will only be shown if you or your OS has configured GNOME Software to show you apps with proprietary licenses, respecting the user’s preference.

The new infrastructure is up and running and I put it into production on Thursday. We rebuilt the whole repository on the new system over the course of the week, signing everything with our new 4096-bit key stored on a Yubikey smartcard USB key. We have 66 apps at the moment, although Alex is working on bringing in the GNOME apps at present – we hope those will be joined soon by the KDE apps, and Endless is planning to move over as many of our 3rd party Flatpaks as possible over the coming months.

So, thanks again to Alex and the whole Flatpak community, and the individuals and the companies who supported making this a reality. You can add the repository and get downloading right away. Welcome to Flathub! Go forth and flatten…
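
For the record, adding the repository and installing an app from it boils down to a couple of commands (the app ID below is only an example):

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.gimp.GIMP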

Robert McQueen: Hello again!

29 July, 2017 - 18:45

Like all good blog posts, this one starts with an apology about not blogging for ages – in my case it looks like it’s been about 7 years which is definitely a new personal best (perhaps the equally or more remarkable thing is that I have diligently kept WordPress running in the meantime). In that time, as you might expect, a few things have happened, like I met a wonderful woman and fell in love and we have two wonderful children. I also decided to part ways with my “first baby” and leave my role as CTO & co-founder of Collabora. This was obviously a very tough decision – it’s a fantastic team where I met and made many life-long friends, and they are still going strong and doing awesome things with Open Source. However, shortly after that, in February last year, I was lucky enough to land what is basically a dream job working at Endless Computers as the VP of Deployment.

As I’m sure most readers know, Endless is building an OS to bring personal computing to millions of new users across the world. It’s based on Free Software like GNOME, Debian, ostree and Flatpak, and the more successful Endless is, the more people who get access to education, technology and opportunity – and the more FOSS users and developers there will be in the world. But in my role, I get to help define the product, understand our users and how technology might help them, take Open Source out to new people, solve commercial problems to get Endless OS out into the world, manage a fantastic team and work with bright people, learn from great managers and mentors, and still find time to squash bugs, review and write more code than I used to. Like any startup, we have a lot to do and not enough time to do it, so although there aren’t quite enough days in the week, I’m really happy!

In any case, the main point of this blog post is that I’m at GUADEC in Manchester right now, and I’d like to blog about Flathub, but I thought it would be weird to just show up and say that after 7 years of silence without saying hello again.

Russell Coker: Apache Mesos on Debian

29 July, 2017 - 16:04

I decided to try packaging Mesos for Debian/Stretch. I had a spare system with a i7-930 CPU, 48G of RAM, and SSDs to use for building. The i7-930 isn’t really fast by today’s standards, but 48G of RAM and SSD storage mean that overall it’s a decent build system – faster than most systems I run (for myself and for clients) and probably faster than most systems used by Debian Developers for build purposes.

There’s a github issue about the lack of an upstream package for Debian/Stretch [1]. That upstream issue could probably be worked around by adding Jessie sources to the APT sources.list file, but a package for Stretch is what is needed anyway.

Here is the documentation on building for Debian [2]. The list of packages it gives as build dependencies is incomplete; it also needs zlib1g-dev libapr1-dev libcurl4-nss-dev openjdk-8-jdk maven libsasl2-dev libsvn-dev. So BUILDING this software requires Java + Maven, Ruby, and Python along with autoconf, libtool, and all the usual Unix build tools. It also requires the FPM (Fucking Package Management) tool; I take the choice of name as an indication of the professionalism of the author.
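
For reference, pulling in the extra build dependencies listed above (on top of the usual build tools, plus Ruby and Python for the bindings) boils down to something like:

apt-get install build-essential autoconf libtool zlib1g-dev libapr1-dev libcurl4-nss-dev \
    openjdk-8-jdk maven libsasl2-dev libsvn-dev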

Building the software on my i7 system took 79 minutes, which included 76 minutes of CPU time (I didn't use the -j option to make). At the end of the build it turned out that I had mistakenly failed to install the Fucking Package Management "gem" and it aborted. At this stage I gave up on Mesos; the pain involved exceeds my interest in trying it out.

How to do it Better

One of the aims of Free Software is that bugs are more likely to get solved if many people look at them. There aren’t many people who will devote 76 minutes of CPU time on a moderately fast system to investigate a single bug. To deal with this software should be prepared as components. An example of this is the SE Linux project which has 13 source modules in the latest release [3]. Of those 13 only 5 are really required. So anyone who wants to start on SE Linux from source (without considering a distribution like Debian or Fedora that has it packaged) can build the 5 most important ones. Also anyone who has an issue with SE Linux on their system can find the one source package that is relevant and study it with a short compile time. As an aside I’ve been working on SE Linux since long before it was split into so many separate source packages and know the code well, but I still find the separation convenient – I rarely need to work on more than a small subset of the code at one time.

The requirement of Java, Ruby, and Python to build Mesos could be partly due to language interfaces to call Mesos interfaces from Ruby and Python. One solution to that is to have the C libraries and header files to call Mesos and have separate packages that depend on those libraries and headers to provide the bindings for other languages. Another solution is to have autoconf detect that some languages aren't installed and just not try to compile bindings for them (this is one of the purposes of autoconf).

The use of a tool like Fucking Package Management means that you don’t get help from experts in the various distributions in making better packages. When there is a FOSS project with a debian subdirectory that makes barely functional packages then you will be likely to have an experienced Debian Developer offer a patch to improve it (I’ve offered patches for such things on many occasions). When there is a FOSS project that uses a tool that is never used by Debian developers (or developers of Fedora and other distributions) then the only patches you will get will be from inexperienced people.

A software build process should not download anything from the Internet. The source archive should contain everything that is needed, and there should be declared dependencies on external software. Any downloads from the Internet need to be protected from MITM attacks, which means that a responsible software developer has to read through the build system and make sure that appropriate PGP signature checks etc. are performed. It could be that the files the Mesos build downloaded from the Apache site had appropriate PGP checks performed – but it would take me extra time and effort to verify this, and I can’t distribute software without being sure of it. Also, reproducible builds are one of the latest things we aim for in the Debian project; this means we can’t just download files from web sites, because the next build might get a different version.
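
As a sketch of what an appropriate check could look like (the file names and URL are purely illustrative), verifying a downloaded tarball against a detached OpenPGP signature with a previously imported upstream key is roughly:

  # Illustrative only: fetch a tarball plus its detached signature and
  # verify it against an already-imported upstream key.
  wget https://www.example.org/dist/foo-1.0.tar.gz
  wget https://www.example.org/dist/foo-1.0.tar.gz.asc
  gpg --verify foo-1.0.tar.gz.asc foo-1.0.tar.gz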

Finally, the fpm (Fucking Package Management) tool is a Ruby gem that has to be installed with the “gem install” command. Any time you document a gem install command you should include the -v option to ensure that everyone is using the same version of that gem; otherwise there is no guarantee that people who follow your documentation will get the same results. Also, a quick Google search didn’t indicate whether gem install checks PGP keys or verifies data integrity in other ways. If I’m going to compile software for other people to use, I’m concerned about getting unexpected results from such things. A Google search indicates that Ruby people were worried about this in 2013, but doesn’t indicate whether they solved the problem properly.
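
For example, a pinned install looks something like this (the version number is only a placeholder, not a recommendation):

  # Pin fpm to one specific version so that everyone following the
  # documentation builds with the same tool (placeholder version).
  gem install fpm -v 1.9.3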


Chris Lamb: More Lintian hacking

29 July, 2017 - 15:31

Lintian is a static analysis tool for Debian packages, reporting on various errors, omissions and quality-assurance issues to the maintainer.

I seem to have found myself hacking on it a bit more recently (see my previous installment). In particular, here's the code of mine — which made for a total of 20 bugs closed — that made it into the recent 2.5.52 release:

New tags
  • Check for the presence of an .asc signature in a .changes file if an upstream signing key is present. (#833585, tag)
  • Warn when dpkg-statoverride --add is called without a corresponding --list. (#652963, tag)
  • Check for years in debian/copyright that are later than the top entry in debian/changelog. (#807461, tag)
  • Trigger a warning when DEB_BUILD_OPTIONS is used instead of DEB_BUILD_MAINT_OPTIONS. (#833691, tag)
  • Look for "FIXME" and similar placeholders in various files in the debian directory. (#846009, tag)
  • Check for useless build-dependencies on dh-autoreconf or autotools-dev under Debhelper compatibility levels 10 or higher. (#844191, tag)
  • Emit a warning if GObject Introspection packages are missing dependencies on ${gir:Depends}. (#860801, tag)
  • Check packages do not contain upstart configuration under /etc/init. (#825348, tag)
  • Emit a classification tag if a maintainer script such as debian/postinst is an ELF binary. (tag)
  • Check for overly-generic manual pages such as README.3pm.gz. (#792846, tag)
  • Ensure that (non-ELF) maintainer scripts begin with #!. (#843428, tag)
Regression fixes
  • Ensure r-data-without-readme-source checks the source package, not the binary; README.source files are not installed in the latter. (#866322, tag)
  • Don't emit source-contains-prebuilt-ms-help-file for files generated by Halibut. (#867673, tag)
  • Add .yml to the list of file extensions to avoid false positives when emitting extra-license-file. (#856137, tag)
  • Append a regression test for enumerated lists in the "a) b) c) …" style, which would previously trigger a "duplicate word" warning if the following paragraph began with an "a." (#844166, tag)
Documentation updates
  • Rename copyright-contains-dh-make-perl-boilerplate to copyright-contains-automatically-extracted-boilerplate as it can be generated by other tools such as dh-make-elpa. (#841832, tag)
  • Changes to new-package-should-not-package-python2-module (tag):
    • Upgrade from I: to W:. (#829744)
    • Clarify wording in description to make the justification clearer.
  • Clarify justification in debian-rules-parses-dpkg-parsechangelog. (#865882, tag)
  • Expand the rationale for the latest-debian-changelog-entry-without-new-date tag to mention possible implications for reproducible builds. (tag)
  • Update the source-contains-prebuilt-ms-help-file description; there exists free software to generate .chm files. (tag)
  • Append an example shell snippet to explain how to prevent init.d-script-sourcing-without-test. (tag)
  • Add a missing "contains" verb to the description of the debhelper-autoscript-in-maintainer-scripts tag. (tag)
  • Consistently use the same "Debian style" RFC822 date format for both "Mirror timestamp" and "Last updated" on the Lintian index page. (#828720)
Misc
  • Allow the use of suppress-tags=<tag>[,<tag>[,<tag>]] in ~/.lintianrc. (#764486)
  • Improve the support for "3.0 (git)" packages. However, they remain marked as unsupported-source-format as they are not accepted by the Debian archive. (#605999)
  • Apply patch from Dylan Aïssi to also check for .RData files (not just .Rdata) when checking for the copyright status of R Project data files. (#868178, tag)
  • Match more Lena Söderberg images. (#827941, tag)
  • Refactor a hard-coded list of possible upstream key locations to the common/signing-key-filenames Lintian::Data resource.

Norbert Preining: Gaming: The Long Dark

29 July, 2017 - 09:15

I normally don’t play survival games or walking simulators, but The Long Dark by Hinterland Games, which I purchased back when it was still in early access on Steam, took me into new realms. You are tossed out into the Canadian wilderness with hardly anything, and your only aim is to survive: find shelter and food, craft tools, hunt, explore. And while everything so far has been Sandbox mode, on August 1st the first episode of Story mode is released. Best time to get the game!

You will be greeted with some icy nights, but also with great vistas and relaxed evenings at a fireplace; you will try to survive on moldy food and rotten energy bars, but also feast by the fireside while reading a good book. A real treat, this game!

Sandbox mode features five different areas to explore. Each one is large enough to spend weeks (in game time) wandering around. The easiest area to start with is Mystery Lake, with plenty of shelter (several huts) and an abundance of resources. And just in case you are getting bored, all the areas are connected via tunnels or caves, and one can wander off into the neighboring places. My home in Mystery Lake was always the Camp Office, the usual suspect. Nice views, fishing huts nearby to get fresh fish, lots of space.

After managing to get from your starting place (which is arbitrary as far as I can see) to one of the shelters, one starts collecting food and wood, scavenging every accessible place for tools, weapons and burning material. And soon the backpack becomes too heavy, and one needs to store stuff and decide what to take.

This is a very well done part of the game. The backpack is not limited by the number of items, but by the weight you can carry. That includes clothes (which can get quite heavy) and all the items in your backpack. In addition, the longer the day gets and the more tired you become, the less weight you can carry. And if the backpack gets too heavy, you slow to a crawl.

The outside world influences the player’s condition in many ways: temperature, the wetness of your clothes, hunger, thirst, exhaustion, but also infections and bruises all need to be taken care of; otherwise the end comes faster than one wishes.

I have only two things to complain about: First, walking or running outside does not raise your body temperature. This is unrealistic and should have been taken into account. The other thing is the difficulty: I have played for weeks of game time on the easiest level without any problem, but the moment I switched to the second level of difficulty (of 5!), I did not even manage to survive for 2(!) days. Wolves, starvation, thirst: any of those kills me in an instant. I don’t want to know how the hardest level feels, but there is quite a steep step in difficulty here.

The game takes a very realistic view of the weather: every day is different (sunny, foggy, blizzards, windy), often changing very quickly. It is wise to plan one’s activities according to the weather, as it is very unforgiving.

With beautifully crafted landscapes, loads of areas to explore, the pride of surviving for at least a few weeks, and lots of tools to find, craft and try out, this game, even while it is still in Sandbox mode, is a real treat. My absolute favorite since I finished the Talos Principle and the Portal series; absolutely recommended!

Steve Kemp: So I'm considering a new project

29 July, 2017 - 04:00

There used to be a Puppet Labs project called puppet-dashboard, which would let you see the state of your managed nodes. Having even a very basic and simple "report user-interface" is pretty neat when you're pushing out a change and you want to see it being applied across your fleet of hosts.

There are some other neat features, such as allowing you to identify failures easily, and see nodes that haven't reported-in recently.

This was spun out into a community-supported project, which is now largely stale.

Having a dashboard is nice, but the current state of the software is less good. It turns out that the implementation is pretty simple though:

  • Puppet runs on a node.
  • The node reports back to the puppet-master what happened.
  • The puppet-master can optionally HTTP-post that report to the reporting node.

The reporting node can thus receive real-time updates, and do what it wants with them. You can even sidestep the extra server if you wish:

  • The puppet-master can archive the reports locally.

For example on my puppet-master I have this:

  root@master /var/lib/puppet/reports # ls | tail -n4
  smaug.dh.bytemark.co.uk
  ssh.steve.org.uk
  www.dns-api.com
  www.steve.org.uk

Inside each directory is a bunch of YAML files which describe the state of the host and the recipes that were applied. Parsing those is pretty simple; the hardest part would be making a useful/attractive GUI. But happily we have the existing one to "inspire" us.
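
As a minimal sketch of that parsing (assuming the directory layout shown above and a top-level "status:" field in each YAML report), something like this prints the most recent status for every node:

  # Print the status recorded in the newest report of each node.
  # Assumes /var/lib/puppet/reports/<node>/<timestamp>.yaml as above
  # and a top-level "status:" key in the YAML.
  for dir in /var/lib/puppet/reports/*/; do
      latest=$(ls "$dir" | sort | tail -n 1)
      printf '%-30s %s\n' "$(basename "$dir")" \
             "$(grep -m 1 '^status:' "$dir$latest")"
  done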

I think I just need to write down a list of assumptions and see if they make sense. After all, the existing installation(s) won't break; it's just a matter of deciding whether this is a useful/worthwhile way to spend some time.

  • Assume you have 100+ hosts running puppet 4.x
  • Assume you want a broad overview:
    • All the nodes you're managing.
    • Whether their last run triggered a change, resulted in an error, or logged anything.
    • If so what changed/failed/was output?
  • For each individual run you want to see:
    • Rough overview.
  • Assume you don't want to keep history indefinitely, just the last 50 runs or so of each host.

Beyond that you might want to export data about the managed nodes themselves. For example you might want a list of all the hosts which have "bash" installed on them, or all nodes with the local user "steve". I've written that stuff already, as it is very useful for auditing etc.

The hard part is that to get the extra data you'll need to include a puppet module to collect it. I suspect a new dashboard would be broadly interesting/useful, but without that extra detail it might not be compelling. You can't point to a slightly more modern installation and say "Yes, this is worth migrating to". But if you have extra meta-data you can say:

  • Give me a list of all hosts running wheezy.
  • Give me a list of all hosts running exim4 version 4.84.2-2+deb8u4.

And that facility is very useful when you have shellshock, or similar knocking at your door.

Anyway as a hacky start I wrote some code to parse reports, avoiding the magic object-fu that the YAML would usually invoke. The end result is this:

 root@master ~# dump-run www.steve.org.uk
 www.steve.org.uk
    Puppet Version: 4.8.2
    /var/lib/puppet/reports/www.steve.org.uk/201707291813.yaml
    Runtime: 2.16
    Status:changed
    Time:2017-07-29 18:13:04 +0000
    Resources
            total -> 176
            skipped -> 2
            failed -> 0
            changed -> 3
            out_of_sync -> 3
            scheduled -> 0
            corrective_change -> 3
    Changed Resources
            Ssh_authorized_key[skx@shelob-s-fi] /etc/puppet/code/environments/production/modules/ssh_keys/manifests/init.pp:17
            Ssh_authorized_key[skx@deagol-s-fi] /etc/puppet/code/environments/production/modules/ssh_keys/manifests/init.pp:22
            Ssh_authorized_key[steve@ssh.steve.org.uk-s-fi] /etc/puppet/code/environments/production/modules/ssh_keys/manifests/init.pp:27
    Skipped Resources
            Exec[clone sysadmin utils]
            Exec[update sysadmin utils]

Osamu Aoki: exim4 configuration for Desktop (better gmail support)

28 July, 2017 - 23:57
For most of our desktop PCs running stock exim4 and mutt, I think sending out mail is becoming a bit rough, since using a random smarthost causes lots of trouble due to the measures taken to prevent spam.

As mentioned in the Exim4 user FAQ, /etc/hosts should list an FQDN with an externally DNS-resolvable domain name, instead of localdomain, so that the correct EHLO/HELO line is used.  That's the first step.
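
To check what name exim4 will actually use, you can compare the system FQDN with exim's idea of its primary hostname (a quick sketch using the Debian exim4 binary):

# Show the FQDN and the name exim4 uses for EHLO/HELO:
hostname --fqdn
exim4 -bP primary_hostname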

The stock configuration of exim4 only allows you to use a single smarthost for all your mail.  I use one address for my personal mail, which is checked on my smartphone too, and another account for subscribing to mailing lists.  So I needed to tweak things ...

Usually, mutt is smart enough to set the From: address, since my .muttrc has

# Set default for From: for replies for alternates.
set reverse_name

So how can I teach exim4 to send mail via different accounts depending on the address listed in the From: header?

For my Gmail accounts, each mail should be sent over the account-specific SMTP connection matching the From: header, to get all the modern spam-protection data (DKIM, SPF, DMARC, ...) into the right state.  (Besides, they overwrite the From: header anyway if you use the wrong connection.)

For my debian.org mail, messages should be sent from my shell account on people.debian.org, so they are very unlikely to be blocked. Sometimes I wasn't sure whether debian.org mails sent through my ISP's smarthost were really getting to the intended person.

To these ends, I have created small patches to the /etc/exim4/conf.d files and reported them to the Debian BTS: #869480 Support multiple smarthosts (gmail support).  These patches are against the source package.

To use my configuration tweak idea, there is an easier route no matter which exim version you are using: copy (and read) the pertinent edited files from my GitHub site into your installed /etc/exim4/conf.d files and get the benefits.
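
After editing files under /etc/exim4/conf.d, the usual Debian steps to regenerate and activate the configuration are roughly as follows (a sketch; it assumes the split configuration, i.e. dc_use_split_config='true' in /etc/exim4/update-exim4.conf.conf):

# Rebuild the exim4 configuration from the conf.d snippets and restart
# (assumes the Debian split configuration is enabled).
update-exim4.conf
systemctl restart exim4
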
If you really wish to keep the envelope address etc. matching the From: header, rewrite aggressively based on the From: header using an edited rewrite/31_exim4-config_rewriting as follows:
.ifndef NO_EAA_REWRITE_REWRITE
*@+local_domains "${lookup{${local_part}}lsearch{/etc/email-addresses}\
                   {$value}fail}" f
# identical rewriting rule for /etc/mailname
*@ETC_MAILNAME "${lookup{${local_part}}lsearch{/etc/email-addresses}\
                   {$value}fail}" f
.endif
* "$h_from:" Frs
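
A quick way to check the effect of such rewriting rules without sending real mail is exim's rewrite test mode (the address below is just an example):

# Ask exim4 how it would rewrite a given address with the current rules:
exim4 -brw someone@example.org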

So far it's working fine for me, but if you find a bug, let me know.

Osamu

Joachim Breitner: How is coinduction the dual of induction?

28 July, 2017 - 09:05

Earlier today, I demonstrated how to work with coinduction in the theorem provers Isabelle, Coq and Agda, with a very simple example. This reminded me of a discussion I had in Karlsruhe with my then colleague Denis Lohner: If coinduction is the dual of induction, why do the induction principles look so different? I like what we observed there, so I’d like to share this.

The following is mostly based on my naive understanding of coinduction, gleaned from what I observe in the implementation in Isabelle. I am sure that a different, more categorial presentation of datatypes (as initial resp. terminal objects in some category of algebras) makes the duality more obvious, but that does not necessarily help the working Isabelle user who wants to make sense of coinduction.

Inductive lists

I will use the usual polymorphic list data type as an example. So on the one hand, we have normal, finite inductive lists:

datatype 'a list = nil | cons (hd : 'a) (tl : "'a list")

with the well-known induction principle that many of my readers know by heart (syntax slightly un-isabellized):

P nil → (∀x xs. P xs → P (cons x xs)) → ∀ xs. P xs

Coinductive lists

In contrast, if we define our lists coinductively to get possibly infinite, Haskell-style lists, by writing

codatatype 'a llist = lnil | lcons (hd : 'a)  (tl : "'a llist")

we get the following coinduction principle:

(∀ xs ys.
    R xs ys → (xs = lnil) = (ys = lnil) ∧
               (xs ≠ lnil ⟶ ys ≠ lnil ⟶
	         hd xs = hd ys ∧ R (tl xs) (tl ys))) →
(∀ xs ys. R xs ys → xs = ys)

This is less scary than it looks at first. It tells you: “if you give me a relation R between lists which implies that either both lists are empty or both lists are non-empty, and furthermore, if both are non-empty, that they have the same head and tails related by R, then any two lists related by R are actually equal.”

If you think of the infinite list as a series of states of a computer program, then this is nothing other than a bisimulation.

So we have two proof principles, both of which make intuitive sense. But how are they related? They look very different! In one, we have a predicate P, in the other a relation R, to point out just one difference.

Relation induction

To see how they are dual to each other, we have to recognize that both these theorems are actually specializations of a more general (co)induction principle.

The datatype declaration automatically creates a relator:

rel_list :: ('a → 'b → bool) → 'a list → 'b list → bool

The definition of rel_list R xs ys is that xs and ys have the same shape (i.e. length), and that the corresponding elements are pairwise related by R. You might have defined this relation yourself at some time, and if so, you probably introduced it as an inductive predicate. So it is not surprising that the following induction principle characterizes this relation:

Q nil nil →
(∀x xs y ys. R x y → Q xs ys → Q (cons x xs) (cons y ys)) →
(∀xs ys. rel_list R xs ys → Q xs ys)

Note how similar this lemma is in shape to the normal induction principle for lists above! And indeed, if we choose Q xs ys ↔ (P xs ∧ xs = ys) and R x y ↔ (x = y), then we obtain exactly that. In that sense, the relation induction is a generalization of the normal induction.

Relation coinduction

The same observation can be made in the coinductive world. Here, as well, the codatatype declaration introduces a function

rel_llist :: ('a → 'b → bool) → 'a llist → 'b llist → bool

which relates lists of the same shape with related elements – only that this one also relates infinite lists, and therefore is a coinductive relation. The corresponding rule for proof by coinduction is not surprising and should remind you of bisimulation, too:

(∀xs ys.
    R xs ys → (xs = lnil) = (ys = lnil) ∧
              (xs ≠ lnil ⟶ ys ≠ lnil ⟶
	        Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys))) →
(∀ xs ys. R xs ys → rel_llist Q xs ys)

It is even more obvious that this is a generalization of the standard coinduction principle shown above: Just instantiate Q with equality, which turns rel_llist Q into equality on the lists, and you have the theorem above.

The duality

With our induction and coinduction principle generalized to relations, suddenly a duality emerges: If you turn around the implication in the conclusion of one you get the conclusion of the other one. This is an example of “cosomething is something with arrows reversed”.

But what about the premise(s) of the rules? What happens if we turn around the arrow here? Although slightly less immediate, it turns out that they are the same as well. To see that, we start with the premise of the coinduction rule, reverse the implication and then show that to be equivalent to the two premises of the induction rule:

(∀xs ys.
    R xs ys ← (xs = lnil) = (ys = lnil) ∧
              (xs ≠ lnil ⟶ ys ≠ lnil ⟶
	        Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
= { case analysis (the other two cases are vacuously true) }
  (∀xs ys.
    xs = lnil → ys = lnil →
    R xs ys ← (xs = lnil) = (ys = lnil) ∧
              (xs ≠ lnil ⟶ ys ≠ lnil ⟶
	        Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
∧ (∀xs ys.
    xs ≠ lnil ⟶ ys ≠ lnil ⟶
    R xs ys ← (xs = lnil) = (ys = lnil) ∧
              (xs ≠ lnil ⟶ ys ≠ lnil ⟶
	        Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
= { simplification }
  (∀xs ys.  xs = lnil → ys = lnil → R xs ys)
∧ (∀x xs y ys.  R (cons x xs) (cons y ys) ← (Q x y ∧ R xs ys))
= { more rewriting }
  R nil nil
∧ (∀x xs y ys. Q x y → R xs ys → R (cons x xs) (cons y ys))

Conclusion

The coinduction rule is not the direct dual of the induction rule, but both are specializations of more general, relational proof methods, where the duality is clearly present.

More generally, this little excursion shows that it is often beneficial to think of types less as sets, and more as relations – this way of thinking is surprisingly fruitful, and led to proofs of parametricity and free theorems and other nice things.

NOKUBI Takatsugu: uim package integration is a work in progress

28 July, 2017 - 07:44

Now I am helping to maintain the uim package. It is split into too many packages (over 40, including dbgsym).

The working status is documented in the Debian wiki entry https://wiki.debian.org/JapaneseEnvironment/Uim.

Currently it is not finished; when I’ve done all of them, I’ll upload them to experimental.

Joachim Breitner: Coinduction in Coq and Isabelle

28 July, 2017 - 03:24

The DeepSpec Summer School is almost over, and I have had a few good discussions. One revolved around coinduction: What is it, how does it differ from induction, and how do you actually prove something? In the course of the discussion, I came up with a very simple coinductive exercise, and solved it in both Coq and Isabelle.

The task

Define the extended natural numbers coinductively. Define the min function and the  ≤  relation. Show that min(n, m) ≤ n holds.

Coq

The definitions are straightforward. Note that in Coq, we use the same command to define a coinductive data type and a coinductively defined relation:

CoInductive ENat :=
  | N : ENat
  | S : ENat -> ENat.

CoFixpoint min (n : ENat) (m : ENat)
  :=match n, m with | S n', S m' => S (min n' m')
                    | _, _       => N end.

CoInductive le : ENat -> ENat -> Prop :=
  | leN : forall m, le N m
  | leS : forall n m, le n m -> le (S n) (S m).

The lemma is specified as

Lemma min_le: forall n m, le (min n m) n.

and the proof method of choice to show that some coinductive relation holds, is cofix. One would wish that the following proof would work:

Lemma min_le: forall n m, le (min n m) n.
Proof.
  cofix.
  destruct n, m.
  * apply leN.
  * apply leN.
  * apply leN.
  * apply leS.
    apply min_le.
Qed.

but we get the error message

Error:
In environment
min_le : forall n m : ENat, le (min n m) n
Unable to unify "le N ?M170" with "le (min N N) N

Effectively, as Coq is trying to figure out whether our proof is correct, i.e. type-checks, it stumbled on the equation min N N = N, and like a kid scared of coinduction, it did not dare to “run” the min function. The reason it does not just “run” a CoFixpoint is that doing so too daringly might simply not terminate. So, as Adam explains in a chapter of his book, Coq reduces a cofixpoint only when it is the scrutinee of a match statement.

So we need to get a match statement in place. We can do so with a helper function:

Definition evalN (n : ENat) :=
  match n with | N => N
               | S n => S n end.

Lemma evalN_eq : forall n, evalN n = n.
Proof. intros. destruct n; reflexivity. Qed.

This function does not really do anything besides nudging Coq to actually evaluate its argument to a constructor (N or S _). We can use it in the proof to guide Coq, and the following goes through:

Lemma min_le: forall n m, le (min n m) n.
Proof.
  cofix.
  destruct n, m; rewrite <- evalN_eq with (n := min _ _).
  * apply leN.
  * apply leN.
  * apply leN.
  * apply leS.
    apply min_le.
Qed.

Isabelle

In Isabelle, definitions and types are very different things, so we use different commands to define ENat and le:

theory ENat imports  Main begin

codatatype ENat =  N | S  ENat

primcorec min where
   "min n m = (case n of
       N ⇒ N
     | S n' ⇒ (case m of
        N ⇒ N
      | S m' ⇒ S (min n' m')))"

coinductive le where
  leN: "le N m"
| leS: "le n m ⟹ le (S n) (S m)"

There are actually many ways of defining min; I chose the one most similar to the one above. For more details, see the corec tutorial.

Now to the proof:

lemma min_le: "le (min n m) n"
proof (coinduction arbitrary: n m)
  case le
  show ?case
  proof(cases n)
    case N then show ?thesis by simp
  next
    case (S n') then show ?thesis
    proof(cases m)
      case N then show ?thesis by simp
    next
      case (S m')  with ‹n = _› show ?thesis
        unfolding min.code[where n = n and m = m]
        by auto
    qed
  qed
qed

The coinduction proof methods produces this goal:

proof (state)
goal (1 subgoal):
 1. ⋀n m. (∃m'. min n m = N ∧ n = m') ∨
          (∃n' m'.
               min n m = S n' ∧
               n = S m' ∧
	       ((∃n m. n' = min n m ∧ m' = n) ∨ le n' m'))

I chose to spell the proof out in the Isar proof language, where the outermost proof structure is done relatively explicitly, and I proceed by case analysis mimicking the min function definition.

In the cases where one argument of min is N, Isabelle’s simplifier (a term rewriting tactic, so to say), can solve the goal automatically. This is because the primcorec command produces a bunch of lemmas, one of which states n = N ∨ m = N ⟹ min n m = N.

In the other case, we need to help Isabelle a bit to reduce the call to min (S n) (S m) using the unfolding methods, where min.code contains exactly the equation that we used to specify min. Using just unfolding min.code would send this method into a loop, so we restrict it to the concrete arguments n and m. Then auto can solve the remaining goal (despite all the existential quantifiers).

Summary

Both theorem provers are able to prove the desired result. To me it seems slightly more convenient in Isabelle: a lot of Coq infrastructure relies on the type checker being able to effectively evaluate expressions, which is tricky with cofixpoints, whereas evaluation plays a much less central role in Isabelle, where rewriting is the crucial technique. And while one still cannot simply throw min.code into the simpset, working with objects that do not evaluate easily or completely feels less strange there.

Agda

I was challenged to do it in Agda. Here it is:

module ENat where

open import Coinduction

data ENat : Set where
  N : ENat
  S : ∞ ENat → ENat

min : ENat → ENat → ENat
min (S n') (S m') = S (♯ (min (♭ n') (♭ m')))
min _ _ = N

data le : ENat → ENat → Set where
  leN : ∀ {m} → le N m
  leS : ∀ {n m} → ∞ (le (♭ n) (♭ m)) → le (S n) (S m)

min_le : ∀ {n m} → le (min n m) n
min_le {S n'} {S m'} = leS (♯ min_le)
min_le {N}    {S m'} = leN
min_le {S n'} {N} = leN
min_le {N}    {N} = leN

I will refrain from commenting on it, because I do not really know what I was doing here, but it typechecks; I refer you to the official documentation on coinduction in Agda. Let me just note that I wrote this using plain inductive types and recursion, and added ∞, ♯ and ♭ until it worked.

Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.