Planet Debian

Planet Debian - http://planet.debian.org/

Petter Reinholdtsen: My own self balancing Lego Segway

4 November, 2016 - 16:15

A while back I received a gyro sensor for the NXT Mindstorms controller as a birthday present. It had been on my wishlist for a while, because I wanted to build a Segway-like balancing Lego robot. I had already built a simple balancing robot with the kids, using the light/color sensor included in the NXT kit as the balance sensor, but it did not work very well. It could balance for a while, but it was very sensitive to the light conditions in the room and the reflective properties of the surface, and would fall over after a short while. I wanted something more robust, and the gyro sensor from HiTechnic, which I believed would solve the problem, had been on my wishlist for some years before it suddenly showed up as a gift from my loved ones. :)

Unfortunately I have not had time to sit down and play with it since then. But that changed some days ago, when I was searching for Lego Segway information and came across a recipe from HiTechnic for building the HTWay, a Segway-like balancing robot. Build instructions and source code were included, so it was just a question of putting it all together. And thanks to the great work of many Debian developers, the compiler needed to build the source for the NXT is already included in Debian, so I was ready to go in less than an hour. The resulting robot does not look very impressive in its simplicity:

Because I lack the infrared sensor used to control the robot in the design from HiTechnic, I had to comment out the last task (taskControl). I simply placed /* and */ around it to get the program working without that sensor present. Now it balances just fine until the battery runs low:

Now we would like to teach it how to follow a line and take remote control instructions using the included Bluetooth receiver in the NXT.

If you, like me, love LEGO and want to make sure the tools needed to work with LEGO are available in Debian and all our derivative distributions like Ubuntu, check out the LEGO designers project page and join the Debian LEGO team. Personally I own RCX and NXT controllers (no EV3), and would like to make sure the Debian tools needed to program the systems I own work as they should.

Iain R. Learmonth: PATHspider Plugins

4 November, 2016 - 06:46

This post is cross-posted on the MAMI Project blog here.

In today’s Internet we see an increasing deployment of middleboxes. While middleboxes provide in-network functionality that is necessary to keep networks manageable and economically viable, any packet mangling — whether essential for the needed functionality or accidental as an unwanted side effect — makes it more and more difficult to deploy new protocols or extensions of existing protocols.

For the evolution of the protocol stack, it is important to know which network impairments exist and potentially need to be worked around. While classical network measurement tools are often focused on absolute performance values, PATHspider performs A/B testing between two different protocols or different protocol extensions to perform controlled experiments of protocol-dependent connectivity problems as well as differential treatment.

PATHspider 1.0.1 has been released today and is now available from GitHub, PyPI and Debian unstable. This is a small stable update containing a documentation fix for the example plugin.

PATHspider now contains 3 built-in plugins for measuring path transparency to explicit congestion notification, DiffServ code points and TCP Fast Open. It’s easy to write your own plugins, and if they’re good enough, they may be included in the PATHspider distribution at the next feature release.

We have a GitHub repository you can fork that has a premade directory structure for new plugins. You’ll need to implement logic for performing the two connections, for the A and the B tests. Once you’ve verified your connection logic is working with Wireshark, you can move on to writing Observer functions to analyse the connections made in real time as PATHspider runs. The final step is to merge the results of the connection logic (e.g. did the operating system report a timeout?) with the results of your observer functions (e.g. was ECN successfully negotiated?) and write out the final result.
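To make that concrete, here is a standalone sketch of the connection-logic half using plain sockets. Note that this is illustrative only and not PATHspider's actual plugin API; the target host, port and DSCP value are made-up examples.

# A/B connection logic in the PATHspider spirit: connection A uses the
# operating-system defaults, connection B sets a DiffServ code point,
# and we record whether each connect() succeeded. Plain sockets only.
import socket

def try_connect(host, port, dscp=None, timeout=5):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    if dscp is not None:
        # The DSCP field occupies the upper six bits of the IP TOS byte.
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    try:
        s.connect((host, port))
        return True
    except OSError:
        return False
    finally:
        s.close()

host, port = "test.example.org", 80   # made-up measurement target
record = {"a": try_connect(host, port),            # plain connection
          "b": try_connect(host, port, dscp=46)}   # DSCP EF marking
print(record)  # to be merged with what the observer saw on the wire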

We have dedicated a section of the manual to the development of plugins and we really see plugins as first-class citizens in the PATHspider ecosystem. While future releases of PATHspider may contain new plugins, we’re also making it easier to write plugins by providing reusable library functions such as the tcp_connect() function of the SynchronisedSpider that allows for easy A/B testing of TCP connections with any globally configured state set. We also provide reusable observer functions for simple tasks such as determining if a 3-way handshake completed or if there was an ICMP unreachable message received.

If you’d like to check out PATHspider, you can find the website at https://pathspider.net/.

Current development of PATHspider is supported by the European Union’s Horizon 2020 project MAMI. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 688421. The opinions expressed and arguments employed reflect only the authors’ view. The European Commission is not responsible for any use that may be made of that information.

Simon Josefsson: Why I don’t Use 2048 or 4096 RSA Key Sizes

4 November, 2016 - 03:32

I have used non-standard RSA key sizes for maybe 15 years. For example, my old OpenPGP key created in 2002. By non-standard key sizes, I mean RSA key sizes that are not 2048 or 4096. I do this when I generate OpenPGP/SSH keys (using GnuPG with a smartcard like this) and PKIX certificates (using GnuTLS or OpenSSL, e.g. for XMPP or for HTTPS). People sometimes ask me why. I haven't seen anyone talk about this, or provide a writeup, that is consistent with my views. So I wanted to write about my motivation, so that it is easy for me to refer to, and hopefully to inspire others to think similarly. Or to provoke discussion and disagreement — that's fine, and hopefully I will learn something.


Before proceeding, here is some context: when building new things, it is usually better to use the elliptic-curve algorithm Ed25519 instead of RSA. There is also ECDSA — which has had a comparatively slow uptake, for a number of reasons — that is widely available and is a reasonable choice when Ed25519 is not available. There are also post-quantum algorithms, but they are newer and adopting them today requires a careful cost-benefit analysis.

First some background. RSA is an asymmetric public-key scheme, and relies on generating private keys that are the product of distinct prime numbers (typically two). The size of the resulting product, called the modulus n, is usually expressed in bit length and forms the key size. Historically RSA key sizes used to be a couple of hundred bits; then 512 bits settled as a commonly used size. With better understanding of RSA security levels, the common key size evolved into 768, 1024, and later 2048. Today's recommendations (see keylength.com) suggest that 2048 is on the weak side for long-term keys (5+ years), so there has been a trend to jump to 4096. The performance of RSA private-key operations starts to suffer at 4096, and the bandwidth requirements are causing issues in some protocols. Today 2048 and 4096 are the most common choices.
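To make the terminology concrete, here is a toy illustration in Python (absurdly small primes, purely to show what the numbers mean):

# Toy illustration of "key size": the bit length of the RSA modulus
# n = p * q. Real keys use primes of roughly half the key size each,
# i.e. around 1024 bits apiece for a 2048-bit key.
p, q = 61, 53          # toy primes -- hopelessly insecure
n = p * q
print(n)               # 3233
print(n.bit_length())  # 12, so this would be a "12-bit" RSA key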

My preference for non-2048/4096 RSA key sizes is based on the simple and naïve observation that if I were to build an RSA key cracker, there is some likelihood that I would need to optimize the implementation for a particular key size in order to get good performance. Since 2048 and 4096 are dominant today, and 1024 was dominant some years ago, it may be feasible to build optimized versions for these three key sizes.

My observation is a conservative decision based on speculation, and speculation on several levels. First I assume that there is an attack on RSA that we don't know about. Then I assume that this attack is not as efficient for some key sizes as others, either on a theoretical level, at the implementation level (optimized libraries for certain characteristics), or at an economic/human level (the decision to focus on common key sizes). Then I assume that by avoiding the efficient key sizes I can increase the difficulty to a sufficient level.

Before analyzing whether those assumptions may even remotely make sense, it is useful to understand what is lost by selecting uncommon key sizes. This is to understand the cost of the trade-off.

A significant burden would be if implementations didn't allow selecting unusual key sizes. In my experience, enough common applications support uncommon key sizes, for example GnuPG, OpenSSL, OpenSSH, Firefox, and Chrome. Some applications limit the permitted choices; this appears to be rare, but I have encountered it once. Some environments also restrict the permitted choices: for example, I have experienced that LetsEncrypt has introduced a requirement for RSA key sizes to be a multiple of 8. I noticed this because I chose an RSA key size of 3925 for my blog and received a certificate from LetsEncrypt in December 2015; however, during renewal in 2016 it led to an error message about the RSA key size. Some commercial CAs that I have used before restrict the RSA key size to one of 1024, 2048 or 4096 only. Some smart-cards also restrict the key sizes; sadly the YubiKey has this limitation. So it is not always possible, but it is possible often enough to be worthwhile for me.
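If you want to try this yourself, here is a small sketch of picking an uncommon size that still satisfies the multiple-of-8 constraint described above; the 3072-4096 range is my own arbitrary choice, not a recommendation:

# Pick a random, uncommon RSA key size that is a multiple of 8, to stay
# within the Let's Encrypt constraint mentioned above. The range is an
# arbitrary illustrative choice.
import random

size = random.randrange(3072, 4096 + 1, 8)
print(size)  # e.g. 3720; feed this to your key generation tool of choice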

Another cost is that RSA signature operations are slowed down. This is because the exponentiation function is faster for some operand sizes than others: if the bit pattern of the RSA key size is a 1 followed by several 0's, as it is for 2048 and 4096, it is quicker to compute. I have not done benchmarks, but I have not experienced that this is a practical problem for me. I don't notice RSA operations in the flurry of all the other operations (network, IO) that are usually involved in my daily life. Deploying this on a large scale may have effects, of course, so benchmarks would be interesting.
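For the curious, here is a rough micro-benchmark sketch; it uses pure-Python bignums via pow(), so at best it indicates the trend and says nothing about optimized RSA libraries:

# Time modular exponentiation for power-of-two modulus sizes versus
# nearby uncommon sizes. Random numbers stand in for RSA keys, so this
# is only a rough indication of the size/speed relationship.
import random
import timeit

def modexp_seconds(bits, repeats=10):
    n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # full-size, odd
    d = random.getrandbits(bits)       # stand-in for a private exponent
    m = random.getrandbits(bits - 1)   # stand-in for a message
    return timeit.timeit(lambda: pow(m, d, n), number=repeats)

for bits in (2048, 2130, 3925, 4096):
    print(bits, round(modexp_seconds(bits), 3))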

Back to the speculation that leads me to this choice. The first assumption is that there is an attack on RSA that we don't know about. In my mind, until there are proofs that the currently known attacks (GNFS-based attacks) are the best that can be found, or at least some heuristic argument that we can't do better than the current attacks, the probability of an unknown RSA attack is therefore, as strange as it may sound, 100%.

The second assumption is that the unknown attack(s) are not as efficient for some key sizes as others. That statement can also be expressed like this: the cost to mount the attack is higher for some key sizes compared to others.

At the implementation level, it seems reasonable to assume that implementing an RSA cracker for arbitrary key sizes could be more difficult and more costly than focusing on particular key sizes. Focusing on some key sizes allows optimization and less complex code.

At the mathematical level, the assumption that the attack would be costlier for certain RSA key sizes appears dubious. It depends on the kind of algorithm the unknown attack is. For something similar to GNFS attacks, I believe the same algorithm applies equally to RSA key sizes of 2048, 2730 and 4096, and that the running time depends mostly on the key size. Other algorithms that could crack RSA, such as some approximation algorithms, do not seem likely to be thwarted by using non-standard RSA key sizes either. I am not a mathematician though.
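For reference, the heuristic running time of the GNFS is a smooth function of the modulus n alone, with no special behaviour at particular key sizes, which is what this intuition rests on:

L_n\left[\tfrac{1}{3}, (64/9)^{1/3}\right] = \exp\left(\left((64/9)^{1/3} + o(1)\right) (\ln n)^{1/3} (\ln \ln n)^{2/3}\right)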

At the economic or human level, it seems reasonable to say that if you can crack 95% of all keys out there (sizes 1024, 2048, 4096) then that is good enough, and cracking the last 5% is just diminishing returns on the investment. Here I am making up the 95% number. Currently, I would guess that more than 95% of all RSA keys on the Internet are of size 1024, 2048 or 4096, though. So this aspect holds as long as people behave as they have done.

The final assumption is that by using non-standard key sizes I raise the bar sufficiently high to make an attack impossible. To be honest, this scenario appears unlikely. However, it might increase the cost somewhat, by a factor of two or five. Which might make someone target lower-hanging fruit instead.

Putting my argument together, I have 1) identified some downsides of using non-standard RSA key sizes and discussed their costs and implications, and 2) mentioned some speculative upsides of using non-standard key sizes. I am not aware of any argument that my speculation is 0% likely to be true. It appears there is some remote chance, higher than 0%, that my speculation is true. Therefore, my personal conservative approach is to hedge against this unlikely, but still possible, attack scenario by paying the moderate cost of using non-standard RSA key sizes. Of course, the QA engineer in me also likes to break things by not doing what everyone else does, so I end this with an ObXKCD.

Guido Günther: Debian Fun in October 2016

4 November, 2016 - 01:09
Debian LTS

October marked the eighteenth month I contributed to Debian LTS under the Freexian umbrella. I spent 10 hours (out of the allocated 9) on the following:

  • updating Icedove to 45.4 resulting in DLA-658-1
  • looking into current xen issues and handling the communication with credativ
  • investigating QEMU CVE-2016-7466 in Wheezy and Jessie
  • backporting patches for qemu-kvm to fix 9 CVEs resulting in DLA-689-1
  • starting with LTS frontdesk (more on that next month)
Other Debian stuff
  • Carsten and I had the chance to talk at the Kopano conference about Debian and the state of Kopano in Debian (slides)
  • Uploaded kopanocore to unstable, currently waiting in NEW
  • Several libvirt-related uploads (libvirt 2.3.0 and 2.4.0~rc*, libvirt-python, ruby-libvirt 0.7.0)
  • Uploaded libosinfo 1.0.0 to experimental. This version has the osinfo database split out into its own source package (osinfo-db, waiting in NEW), so the operating system and hypervisor information is updatable during a stable release without having to update the library itself
Some other Free Software activities

Norbert Preining: Debian/TeX Live 2016.20161103-1

3 November, 2016 - 20:37

This month's update falls on a national holiday in Japan. My recent start as a normal company employee in Japan doesn't leave me enough time on normal days to work on Debian, so things have to wait for holidays. There have been a few notable changes in the current packages, and above all I wanted to fix an RC bug, fixing several other (sometimes rather old) bugs along the way.

From the list of new packages I want to pick out apxproof: for one of my rather long papers (about 60pp with proofs), I had to factor the proofs out into an appendix, and at the time I wrote something like this myself. I did it my own way, but I would have preferred to have a nice package!

Another interesting change is the upstream merge of collection-mathextra (which translated to the Debian package texlive-math-extra) and collection-science (Debian: texlive-science) into a new collection, collection-mathscience. Since introducing new packages and phasing out old ones is generally a pain in Debian, I decided to deviate from the upstream naming convention and use texlive-science for the new collection-mathscience. In the end, mathematics is the most important science of all.

Finally, a word about removals: several ConTeXt packages have been removed because they are outdated. These removals will find their way into an update of the Debian ConTeXt package in the near future. The TeX Live packages lost voss-mathmode, which was retracted by the author for various reasons. He is working on an updated version that will hopefully reappear in both TeX Live and Debian in the near future.

Well, that's it for now. Here is the full list with links. Enjoy.

New packages

apxproof, bangorexam, biblatex-gb7714-2015, biblatex-lni, biblatex-sbl, context-cmscbf, context-cmttbf, context-inifile, context-layout, delimset, latex2nemeth, latexbangla, latex-papersize, ling-macros, notex-bst, platex-tools, testidx, uppunctlm, wtref, xcolor-material.

Removed packages

voss-mathmode.

Updated packages

apa6, autoaligne, babel-german, biblatex-abnt, biblatex-anonymous, biblatex-apa, biblatex-manuscripts-philology, biblatex-nature, biblatex-realauthor, bibtex, bidi, boondox, bxcjkjatype, chickenize, churchslavonic, cjk-gs-integrate, context-filter, cooking-units, ctex, denisbdoc, dvips, europasscv, fixme, glossaries, gzt, handout, imakeidx, ipaex-type1, jsclasses, jslectureplanner, kpathsea, l3build, l3experimental, l3kernel, l3packages, latexindent, latexmk, listofitems, luatexja, marginnote, mcf2graph, minted, multirow, nameauth, newpx, newtx, noto, nucleardata, optidef, overlays, pdflatexpicscale, pst-eucl, reledmac, repere, scanpages, semantic-markup, tableaux, tcolorbox, tetex, texlive-scripts, ticket, todonotes, tracklang, tudscr, turabian-formatting, updmap-map, uspace, visualtikz, xassoccnt, xecjk, yathesis.

Jan Wagner: Container Orchestration Thoughts

3 November, 2016 - 19:48

For some time now everybody (read: developers) has wanted to run their new microservice stacks in containers. I can understand that building and testing an application easily is important for developers.
One of the benefits of containers is that developers can (in theory) put their new versions of applications into production on their own. This is the point where operations is affected, and operations needs to evaluate whether that might evolve into a better workflow.

For yolo^WdevOps people there are some challenges that need to be solved, or at least mitigated, when things need to be done at large(r) scale.

  • Which Orchestration Engine should be considered?
  • How to provide persistent (shared) storage?
  • How to update the base image(s) the apps are built upon, and how to test/deploy them?
Orchestration Engine

Running Docker, which is currently the most popular container solution, on a single host with the docker command line client is something you can do, but this leaves the gap between dev and ops open.

UI For Docker

For some time now, UI For Docker has been available for visualizing and managing containers on a single Docker node. It's pretty awesome, and the best feature so far is the Container Network view, which also shows the linked containers.

Portainer

Portainer is pretty new and it can be deployed as easily as UI For Docker. But the (first) great advantage: it can handle Docker Swarm. Besides that, it has many other great features.

Rancher

Rancher describes itself as a 'container management platform' that 'supports and manages all of your Kubernetes, Mesos, and Swarm clusters'. This is great, because these are currently all of the relevant Docker cluster orchestrators on the market.

For the use cases we are facing, Kubernetes and Mesos both seem like bloated beasts. Usman Ismail has written a really good comparison of orchestration engine options which goes into detail.

Docker Swarm

As there is currently no clear de facto standard/winner of the (container) orchestration wars, I would avoid a vendor lock-in situation (yet). Docker Swarm seems to be evolving and is getting nice features that the other competitors don't provide.
Due to the native integration into the Docker framework and the great community, I believe Docker Swarm will be the Docker orchestration of choice in the long run. This should be supported by Rancher 1.2, which is not released yet.
From this point of view it looks very reasonable that Docker Swarm in combination with Rancher (1.2) might be a good strategy to maintain your container farms in the future.

If you are thinking about putting Docker Swarm into production in its current state, I recommend reading Docker swarm mode: What to know before going live on production by Panjamapong Sermsawatsri.

Persistent Storage

While it is a best practice to use data volume containers these days, providing persistent storage across multiple hosts for shared volumes seems to be tricky.

In theory you can mount a shared-storage volume as a data volume, and there are several volume plugins which support shared storage.

For example, you can use the convoy plugin, which gives you:

  • thin provisioned volumes
  • snapshots of volumes
  • backup of snapshots
  • restore volumes

As backend you can use:

  • Device Mapper
  • Virtual File System(VFS)/Network File System(NFS)
  • Amazon Elastic Block Store(EBS)

The good thing is that convoy is integrated into Rancher. For more information I suggest reading Setting Up Shared Volumes with Convoy-NFS, which also mentions some limitations. If you want to test the Persistent Storage Service, Rancher provides some documentation.

I have not evaluated shared-storage volumes myself yet, but I don't see a solution I would love to use in production (at least on-premise) without strong downsides. Maybe things will move on and there will be a great solution for these caveats in the future.

Keeping base images up-to-date

For some time now there have been many projects that try to detect security problems in your container images in several ways.
Besides general security considerations, you need to deal somehow with issues in the base images that you build your applications on.

Of course, even if you know you have a security issue in your application image, you need to fix it, and how you do that depends on the way you based your application image.

Ways to base your application image
  • You can build your application image entirely from scratch, which leaves all the work to your development team; I wouldn't recommend doing it that way.
  • You can also create one (or more) intermediate image(s) that will be used by your development team.
  • The development team might base their work on images in publicly available or private registries (for example the one bundled with your GitLab CI/CD solution).
What's the struggle with the base image?

If you are using images that are not (well) maintained by other people, you have to wait for them to fix your base image. Using external images might also lead to trust problems (can you trust those people in general?).
In an ideal world, your developers always have fresh base images with security issues fixed. This can probably be achieved by rebuilding every intermediate image periodically or whenever the base image changes.

Paradigm change

Anyway, if you have a new application image available (with no known security issues), you need to deploy it to production. This is summarized by Jason McKay in his article Docker Security: How to Monitor and Patch Containers in the Cloud:

To implement a patch, update the base image and then rebuild the application image. This will require systems and development teams to work closely together.

So patching security issues in the container world changes the workflow significantly. In the old world, operations teams mostly rolled out security fixes for the base systems independently of the development teams.
Now that containers are hitting the production area, this might change things significantly.

Bringing updated images to production

Imagine your development team doesn't work steadily on a project because the product owner considers it feature complete. The base image is provided (in some way) continuously without security issues. The application image is built on top of it automatically on every update of the base image.
How do you push the security fixes to production in such a scenario?

From my point of view you have two choices:

  • Require the development team to test the resulting application image and put it into production
  • Push the new application image into production without review by the development team

The first scenario might lead to a significant delay until the fixes hit production, caused by the probably infrequent work of the development team.

The latter brings your security fixes to production early, at the notably higher risk of breaking your application. This risk can be reduced by the development team implementing extensive tests in the CI/CD pipelines. Rolling updates as provided by Docker Swarm might also reduce the risk of ending up with a broken application.

When implementing an update process for your (application) images to production, you should consider Watchtower, which provides automatic updates for Docker containers.

Conclusion

Not being the product owner or the operations part of an application whose adoption is wide enough to compensate for the tradeoffs we are still facing, I tend not to move large scale production projects into a container environment.
That does not mean it is a bad idea for others, but I'd like to sort out some of the caveats first.

I'm still interested in putting smaller projects into production, not being scared to reimplement or move them onto a new stack.
For smaller projects with a small number of hosts, Portainer doesn't look bad, and neither does Rancher with the Cattle orchestration engine if you just want to manage a couple of nodes.

Things are going to get interesting if Rancher 1.2 supports Docker Swarm clusters out of the box. Let's see what the future brings us in the container world and how to make a great stack out of it.

Bits from Debian: New Debian Developers and Maintainers (September and October 2016)

3 November, 2016 - 18:00

The following contributors got their Debian Developer accounts in the last two months:

  • Arturo Borrero González (arturo)
  • Sandro Knauß (hefee)

The following contributors were added as Debian Maintainers in the last two months:

  • Abhijith PA
  • Mo Zhou
  • Víctor Cuadrado Juan
  • Zygmunt Bazyli Krynicki
  • Robert Haist
  • Sunil Mohan Adapa
  • Elena Grandi
  • Adriano Rafael Gomes
  • Eric Heintzmann
  • Dylan Aïssi
  • Daniel Shahaf
  • Samuel Henrique
  • Kai-Chung Yan
  • Tino Mettler

Congratulations!

Junichi Uekawa: Surprisingly it is already November.

3 November, 2016 - 06:46
Surprisingly it is already November.

Reproducible builds folks: Reproducible Builds: week 79 in Stretch cycle

3 November, 2016 - 02:09

What happened in the Reproducible Builds effort between Sunday October 23 and Saturday October 29 2016:

Upcoming events

The second Reproducible Builds World Summit will be held from December 13th-15th in Berlin! See the link for more details.

Other events:

Introduction to Reproducible Builds - Vagrant Cascadian will be presenting at the SeaGL.org Conference in Seattle, USA on November 12th, 2016.

Reproducible Debian Hackathon - A small hackathon organized in Boston, USA on December 3rd and 4th. If you are interested in attending, contact Valerie Young - spectranaut in the #debian-reproducible IRC channel on irc.oftc.net.

IRC meeting

The next IRC meeting will be held on 2016-11-01 at 18:00 UTC. The meeting after that will be held on 2016-11-15 at 18:00 UTC.

Reproducible work in other projects

Ximin Luo has had his fix to bug 77985 accepted into GCC. This is needed to be able to write test cases for patches to make GCC produce debugging symbols that are reproducible regardless of the build path.

There was continued discussion on the mailing list regarding our build path proposals. It has now been decided to use an environment variable SOURCE_PREFIX_MAP instead of the older proposal SOURCE_ROOT_DIR. This would be similar to GCC's existing -fdebug-prefix-map option, which allows for better disambiguation between paths from different packages.

mandoc's makewhatis is now reproducible. It is used by all the BSDs, including FreeBSD, as well as Alpine Linux and Void Linux.

Packages reviewed and fixed, and bugs filed

Chris Lamb:

Reiner Herrmann:

Reviews of unreproducible packages

145 package reviews have been added, 608 have been updated and 94 have been removed this week, adding to our knowledge about identified issues.

3 issue types have been updated:

Weekly QA work

During reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (17)
  • Matthias Klose (2)
tests.reproducible-builds.org

Debian:

  • Valerie improved the SQL code so that the scheduler job again runs within minutes. She did the same to the job updating the notes about known issues, though this job still runs 12min and not 2min as it used to do…
  • Thanks to a patch from Chris, which was improved by dkg and h01ger after discussions on our list, the .buildinfo files submitted to buildinfo.debian.net are now GPG signed by our build nodes.

General:

  • Holger fixed the NetBSD and coreboot jobs which were broken due to work on the LEDE+OpenWRT jobs.
  • As squid on jessie/i386 (but not on jessie/amd64) crashes frequently, we now have monitoring for this, and Holger fixed a subtle bug there.
diffoscope development

Misc.

This week's edition was written by Ximin Luo, Chris Lamb and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

Sandro Tosi: Debian source package name from the binary name

3 November, 2016 - 01:01
It looks like I forget every time how to do this, and apparently I'm not able to use Google well enough to find it quickly, so let's write down one way to get the source package name from the binary package name:

dpkg-query -W -f='${source:package}\n' <list of bin pkgs>

(since it accepts a list of packages, you can xargs it).
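If you prefer calling it from a script, here is a tiny Python wrapper around the same command (the package names are just examples):

# Wrap the dpkg-query one-liner above: map binary package names to
# their source package names. Example package names only.
import subprocess

def source_packages(binary_pkgs):
    out = subprocess.check_output(
        ["dpkg-query", "-W", "-f", "${source:package}\\n"] + binary_pkgs)
    return out.decode().splitlines()

print(source_packages(["bash", "coreutils"]))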

There are probably another million ways to do this, so don't be shy and comment on this post if you want to share your method.

Enrico Zini: schroot connector for ansible

2 November, 2016 - 22:54

I accidentally made an ansible connector plugin for schroot. I haven't even used ansible yet, so I have no idea what I am doing.

You can choose the chroot type by setting something like schroot_type: jessie in your variables.

You can install this locally as connection_plugins/schroot.py.

# Based on chroot.py (c) 2013, Maykel Moya <mmoya@speedyrails.com>
# Based on chroot.py (c) 2015, Toshio Kuratomi <tkuratomi@ansible.com>
# (c) 2016, Enrico Zini <enrico@debian.org>
#
# This is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import distutils.spawn
import os
import os.path
import pipes
import subprocess
import traceback

from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.module_utils.basic import is_executable
from ansible.utils.unicode import to_bytes

try:
    from __main__ import display
except ImportError:
    from ansible.utils.display import Display
    display = Display()


class Connection(ConnectionBase):
    ''' Local chroot based connections '''

    transport = 'schroot'
    has_pipelining = True
    # su currently has an undiagnosed issue with calculating the file
    # checksums (so copy, for instance, doesn't work right)
    # Have to look into that before re-enabling this
    become_methods = frozenset(C.BECOME_METHODS).difference(('su',))

    def __init__(self, play_context, new_stdin, *args, **kwargs):
        super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)

        self.chroot = self._play_context.remote_addr

        #if os.geteuid() != 0:
            #raise AnsibleError("chroot connection requires running as root")

        # we're running as root on the local system so do some
        # trivial checks for ensuring 'host' is actually a chroot'able dir
        #if not os.path.isdir(self.chroot):
        #    raise AnsibleError("%s is not a directory" % self.chroot)

        #chrootsh = os.path.join(self.chroot, 'bin/sh')
        ## Want to check for a usable bourne shell inside the chroot.
        ## is_executable() == True is sufficient.  For symlinks it
        ## gets really complicated really fast.  So we punt on finding that
        ## out.  As long as it's a symlink we assume that it will work
        #if not (is_executable(chrootsh) or (os.path.lexists(chrootsh) and os.path.islink(chrootsh))):
        #    raise AnsibleError("%s does not look like a chrootable dir (/bin/sh missing)" % self.chroot)

        self.chroot_cmd = distutils.spawn.find_executable('schroot')
        if not self.chroot_cmd:
            raise AnsibleError("chroot command not found in PATH")

        self.chroot_id = "session:" + self.chroot
        self.chroot_type = "stable"
        existing = subprocess.check_output([self.chroot_cmd, "--list", "--all"])
        self.chroot_exists = False
        for line in existing.splitlines():
            if line == self.chroot_id:
                self.chroot_exists = True

    def set_host_overrides(self, host, hostvars=None):
        super(Connection, self).set_host_overrides(host, hostvars)
        self.chroot_type = hostvars.get("schroot_type", self.chroot_type)

    def _connect(self):
        ''' connect to the chroot; nothing to do here '''
        super(Connection, self)._connect()
        if not self._connected:
            if not self.chroot_exists:
                self.chroot_id = subprocess.check_output([self.chroot_cmd, "-b", "-c", self.chroot_type, "-n", self.chroot]).strip()
                subprocess.check_call([self.chroot_cmd, "-r", "-c", self.chroot_id, "-u", "root", "--", "apt-get", "update"])
                subprocess.check_call([self.chroot_cmd, "-r", "-c", self.chroot_id, "-u", "root", "--", "apt-get", "-y", "install", "python"])

            display.vvv("THIS IS A LOCAL CHROOT DIR", host=self.chroot)
            self._connected = True

    def _buffered_exec_command(self, cmd, stdin=subprocess.PIPE):
        ''' run a command on the chroot.  This is only needed for implementing
        put_file() get_file() so that we don't have to read the whole file
        into memory.

        compared to exec_command() it loses some niceties like being able to
        return the process's exit code immediately.
        '''
        executable = C.DEFAULT_EXECUTABLE.split()[0] if C.DEFAULT_EXECUTABLE else '/bin/sh'
        local_cmd = [self.chroot_cmd, "-r", "-c", self.chroot_id, "-u", "root", "--", executable, '-c', cmd]

        display.vvv("EXEC %s" % (local_cmd), host=self.chroot)
        local_cmd = [to_bytes(i, errors='strict') for i in local_cmd]
        p = subprocess.Popen(local_cmd, shell=False, stdin=stdin,
                stdout=subprocess.PIPE, stderr=subprocess.PIPE)

        return p

    def exec_command(self, cmd, in_data=None, sudoable=False):
        ''' run a command on the chroot '''
        super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)

        p = self._buffered_exec_command(cmd)

        stdout, stderr = p.communicate(in_data)
        return (p.returncode, stdout, stderr)

    def _prefix_login_path(self, remote_path):
        ''' Make sure that we put files into a standard path

            If a path is relative, then we need to choose where to put it.
            ssh chooses $HOME but we aren't guaranteed that a home dir will
            exist in any given chroot.  So for now we're choosing "/" instead.
            This also happens to be the former default.

            Can revisit using $HOME instead if it's a problem
        '''
        if not remote_path.startswith(os.path.sep):
            remote_path = os.path.join(os.path.sep, remote_path)
        return os.path.normpath(remote_path)

    def put_file(self, in_path, out_path):
        ''' transfer a file from local to chroot '''
        super(Connection, self).put_file(in_path, out_path)
        display.vvv("PUT %s TO %s" % (in_path, out_path), host=self.chroot)

        out_path = pipes.quote(self._prefix_login_path(out_path))
        try:
            with open(to_bytes(in_path, errors='strict'), 'rb') as in_file:
                try:
                    p = self._buffered_exec_command('dd of=%s bs=%s' % (out_path, BUFSIZE), stdin=in_file)
                except OSError:
                    raise AnsibleError("chroot connection requires dd command in the chroot")
                try:
                    stdout, stderr = p.communicate()
                except:
                    traceback.print_exc()
                    raise AnsibleError("failed to transfer file %s to %s" % (in_path, out_path))
                if p.returncode != 0:
                    raise AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))
        except IOError:
            raise AnsibleError("file or module does not exist at: %s" % in_path)

    def fetch_file(self, in_path, out_path):
        ''' fetch a file from chroot to local '''
        super(Connection, self).fetch_file(in_path, out_path)
        display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self.chroot)

        in_path = pipes.quote(self._prefix_login_path(in_path))
        try:
            p = self._buffered_exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE))
        except OSError:
            raise AnsibleError("chroot connection requires dd command in the chroot")

        with open(to_bytes(out_path, errors='strict'), 'wb+') as out_file:
            try:
                chunk = p.stdout.read(BUFSIZE)
                while chunk:
                    out_file.write(chunk)
                    chunk = p.stdout.read(BUFSIZE)
            except:
                traceback.print_exc()
                raise AnsibleError("failed to transfer file %s to %s" % (in_path, out_path))
            stdout, stderr = p.communicate()
            if p.returncode != 0:
                raise AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))

    def close(self):
        ''' terminate the connection; nothing to do here '''
        super(Connection, self).close()
        #subprocess.check_command([self.chroot_cmd, "-e", "-c", self.chroot_id])
        self._connected = False

Markus Koschany: My Free Software Activities in October 2016

2 November, 2016 - 19:04

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Android, Java, Games and LTS topics, this might be interesting for you.

Debian Android

Debian Games
  • I fixed RC bugs in lordsawar (#839323) and doomsday (#839338).
  • I packaged new upstream releases of atanks, lordsawar, blockattack and peg-e.
  • I completed the Bullet transition (#839243). Bullet 2.85 has also been released this month but it is now too late for Stretch because the transition freeze is already on the 5th of November. I expect more point releases a la 2.85.x during the coming weeks and I intend to provide an updated package in experimental soon.
  • I did some cleanups, package upgrades and bug fixes for box2d and redeclipse (apparently redeclipse-server requires the -data package to be present now).
  • I uploaded Redeclipse 1.5.6 to jessie-backports in the hope that more players will be able to connect to the multiplayer servers. Unfortunately network compatibility breaks rather frequently.
  • I applied a patch from Gianfranco Costamagna to address an Multiarch installation issue (#841824) in FreeOrion.
Debian Java

Debian LTS

This was my eighth month as a paid contributor and I have been paid to work 13 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 10 October until 17 October I was in charge of our LTS frontdesk. I triaged bugs in libgd2, graphicsmagick, libxrender, mupdf, libxfixes, guile-2.0, glance, inspircd, libxi, libxv, libxtst, spip, libxml2, libarchive and jasper.
  • DLA-648-1. Issued a security update for c-ares fixing 1 CVE.
  • DLA-664-1. Issued a security update for libxrender fixing 2 CVEs.
  • DLA-666-1. Issued a security update for guile-2.0 fixing 2 CVEs.
  • DLA-667-1. Issued a security update for libxv fixing 1 CVE.
  • DLA-668-1. Issued a security update for libass fixing 2 CVEs. I triaged CVE-2016-7970 and marked the version in Wheezy as not affected.
  • DLA-673-1. Issued a security update for kdepimlibs fixing 1 CVE.
Non-maintainer uploads
  • I fixed various RC bugs in gnudoq and xsok, which are not maintained by the Games Team. The following games are available in Stretch again: gnudoq (#817296, #817484) and xsok (#817738). I also worked on four more bug fixes to improve the games' desktop integration and internationalization support.
  • I fixed another RC bug in trackballs (#831119) but while I was working on the update I discovered that the game frequently segfaults which makes it kind of unplayable (#839788). I haven’t found a solution yet but I suspect the switch to guile-2.0 and related patches introduced this behavior.
QA
  • I uploaded a new revision of criticalmass and applied a patch from Adrian Bunk to fix #811816, a FTBFS.
  • I triaged an RC bug for raptor2 (#824735) and the issue could be closed after the bug reporter confirmed that raptor2 built fine again.

Raphaël Hertzog: My Free Software Activities in October 2016

2 November, 2016 - 18:09

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it's one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

Last month I started to work on tiff3 but did not have enough time to complete an update; it turns out the issues were hairy enough that nobody else picked up the package. So this month I started again with tiff3 and tiff, and I ended up spending my 13 hours on those two packages.

I filed bugs for issues that were not yet reported to the BTS (#842361 for CVE-2016-5652, #842046 for CVE-2016-5319/CVE-2016-3633/CVE-2015-8668). I marked many CVE as not affecting tiff3 as this source package does not ship the tools (the “tiff” source package does).

Since upstream decided to drop many tools instead of fixing the corresponding security issues, I opted to remove the tools as well. Before doing this, I looked up reverse dependencies of libtiff-tools to ensure that none of the tools removed are used by other packages (the maintainer seems to agree too).

I backported upstream patches for CVE-2016-6223 and CVE-2016-5652.

But the bulk of the time, I spent on CVE-2014-8128, CVE-2015-7554 and CVE-2016-5318. I believe they are all variants of the same problem and upstream seems to agree since he opened a sort of meta-bug to track them. I took inspiration from a patch suggested in ticket #2499 and generalized it a bit by trying to add the tag data for all tags manipulated by the various tools. It was a tiresome process as there are many tags used in multiple places. But in the end, it works as expected. I can no longer reproduce any of the segfaults with the problematic files.

I asked for review/test on the mailing list but did not get much feedback. I’m going to upload the updated packages soon.

Distro Tracker

I noticed a sudden rise in the number of email addresses being automatically unsubscribed from the Debian Package Tracker, and I got a few requests about the bounces. It turns out the BTS has been relaying lots of spam with executable files attached, and those messages are bounced by Google (not silently discarded). This is all very unfortunate… the spam flood is unlikely to stop soon and I can't expect Google to change either, so I had little choice except to try to make the bounce handler smarter. That's what I did: I now have a list of regular expressions that will cause a bounce to be discarded. In other words, once matched, the bounce won't count towards the limit that triggers the automatic unsubscription.
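Conceptually it looks something like this (hypothetical patterns, not the actual distro-tracker code):

# Sketch of the idea with made-up patterns: a bounce matching any of
# the regular expressions is discarded and does not count towards the
# limit that triggers automatic unsubscription.
import re

IGNORED_BOUNCE_REGEXPS = [
    re.compile(r"message contains an executable"),  # hypothetical pattern
    re.compile(r"552.*virus"),                      # hypothetical pattern
]

def counts_towards_unsubscribe(bounce_body):
    return not any(r.search(bounce_body) for r in IGNORED_BOUNCE_REGEXPS)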

Misc Debian work

Bugs filed. In #839403, I suggest the possibility to set the default pin priority for a source in the sources.list file directly. In #840436 I ask the selenium-firefoxdriver maintainer to do what is required to get this non-free package auto-built.

Packaging. I sponsored puppet-lint 2.0.2-0.1 and I reviewed the rozofs package (which I just sponsored into experimental for a start).

Publicity. I’m maintaining the Debian account on Twitter and Facebook. I have been using twitterfeed.com up to now but it’s closing down. I followed their recommendations and switched to dlvr.it to automatically post entries out of the micronews.debian.org feed. In #841165, I reported that the chroots created by sbuild-createchroot are lacking the usual IPv6 entries created by netbase. In #841503, I report a very common cryptsetup upgrade failure that I saw multiple times (both in Debian and in Kali).

Thanks

See you next month for a new summary of my activities.


Jonathan Wiltshire: Reflecting on a year of regular, public IRC meetings

2 November, 2016 - 05:54

The release team first started holding a regular, public planning and status meeting a little over a year ago, in September 2015. At that time, FTP masters had experimented along similar lines and I took some inspiration from that, including the keeping of proper minutes that anyone can look at. I wanted to open up our discussion processes and allow other developers and users to see (and perhaps influence) our plans for the release taking shape month by month, and how we reached certain decisions with a lot of mature discussion and not just on a whim.

A secondary aim, since we are quite geographically distributed and getting together for same-room meetings is hard, was to bring more accountability to ourselves when we decided something ought to happen; if it’s in the minutes, there’s no escaping someone asking “so what happened to … ?”. That’s worked better for us on some topics than on others.

Finally, public minutes mean that anyone who might be interested in joining the team can see easily what we’re up to and how we shape the release throughout the cycle. That might help lower the barrier to entry, which can only be good for the team.

I had hoped that regular meetings would inspire other teams to do similar; I haven’t seen any indication of that to date (though perhaps it’s just down to awareness). The Reproducible Builds contributors held fortnightly meetings for a period in 2015, though not inspired by ours, and I heard recent talk of starting those again. I still think that there is plenty of scope to improve the transparency of core teams in general in Debian, but also that regular meetings aren’t going to work for every team.

A regular slot, which is not varied except when absolutely necessary, is essential for avoiding the temptation to just push it back another week when things are busy. In our office we have an allegedly-regular Thursday afternoon slot for technical demonstrations, which has suffered from exactly that problem for a long time now, and I wanted to avoid that issue. We have a calendar to remind us when each meeting is due, along with other important events like freeze milestones. Our slot is the fourth Wednesday of the month, a fairly arbitrary choice which seems to have worked out quite well.

Time zones are more of an issue, even within Europe. We have mostly used a European evening time, but that's not very helpful further West, where it's in the middle of the working day, or the middle of the night if you're further East (that one fortunately isn't an issue for us so far). Even within Europe it's difficult, as we have to try and balance commuting time in the UK with dinner on the continent, or dinner with late evening, or adjust for daylight-saving changes, or… you get the idea. If we gained a far-eastern team member one day, this would be a real issue.

We use Meetbot for recording the minutes. I have heard criticism that it publicly archives IRC logs to the web essentially forever, but for us that’s the whole point. With a little practice and discipline it does generate really nice minutes, with a bullet summary of the important parts, a summary of actions agreed and a log of the conversation for detailed reference. Anybody reading them can see how we reached a conclusion, and I’m of the view that goes some way to avoiding a reputation for cabal-ism. It does pay to use the #info, #agree and #action tags liberally, but other things are slightly unnatural – like always remembering to use a URL at the beginning of a line and not in the middle of a sentence, or Meetbot doesn’t notice it. Practice goes a long way.

I’ve naturally fallen into chairing most meetings, for better or worse – the consistency seems beneficial, but I worry that I’m dominating the discussion sometimes. Discipline in making sure everybody has been included is something I’ve had to get better at. It’s essential to have a public agenda and to stick to it, and it should include some stock items at the start and end (including making sure the URL to the previous minutes has been given, reviewing outstanding actions, and arranging the next meeting before ending the current one). There is some skill in judging the agenda length and deciding which items can be deferred to make sure it doesn’t drag on too late – we’ve found anything more than an hour is far too long, and between 45 and 60 minutes is pushing it. Getting some easy topics out of the way before starting one which is more contentious can be helpful to avoid having to defer them later. I circulate the URL to the minutes and the date of the next meeting publicly on the mailing list immediately after each meeting, or as soon as possible.

With little feedback, I have no idea if our meetings are helpful to those outside the team or not. We do still hold in-person meetings from time to time when we’re all together, because they’re useful for some circumstances (like some genuinely private topics we occasionally need to discuss, or for sprinting). I would hope that public meetings inspire confidence that we’re on top of the release process, that they show we have a mature and transparent decision making process (for example, in deciding to move the freeze date to accommodate an external release schedule as a one-off, and subsequently deciding to not move it back when circumstances changed), and – mostly – that other teams might benefit for the same reasons. But I can also see that they make more sense in a team with a defined project cycle than they might in one which is more administrative or where work is more sporadic (no point holding a meeting for the sake of it, after all).

Thorsten Alteholz: My Debian Activities in October 2016

2 November, 2016 - 04:17

FTP assistant

This month I caught up from last month and marked 317 packages for accept and rejected 23 packages. I also sent 5 emails to maintainers asking questions.

Debian LTS

This was my twenty-eighth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 13h. During that time I did uploads of

  • [DLA 645-1] bind9 security update
  • [DLA 646-1] zendframework security update
  • [DLA 665-1] libgd2 security update
  • [DLA 671-1] libxvmc security update
  • [DLA 672-1] bind9 security update
  • [DLA 691-1] libxml2 security update

The second upload of bind was an embargoed one.

Other stuff

I uploaded a new version of greylistd and fixed RC bug #837501. A new version of highlight.js fixed RC bug #830189. With a new upstream version of chktex I could close bugs #782342, #782343 and #819885. I also uploaded the new package node-random-bytes and new upstream versions of alljoyn-core-1604 and duktape.

Finally, after about 4 years, I managed to upload entropybroker and instantly had to deal with #840018, #840019 and #840020. One cannot overemphasize the importance of our QA stuff!

I also uploaded a new version of libctl to solve the -fPIC issue, but was asked a short time later to revert that again :-(.

As already mentioned some days ago, I adopted libmatthew-java. At that time about 956 packages were orphaned and I asked everybody to adopt one of these packages. Unfortunately there are now 982 orphaned packages. I guess I have to clear up a misunderstanding: you should adopt those packages, not orphan more of them!

Simon Richter: Using the Arduino IDE with a tiling window manager

1 November, 2016 - 23:16

The Arduino IDE does not work properly with tiling window managers, because they do some interesting reparenting.

To solve this, add

_JAVA_AWT_WM_NONREPARENTING=1
export _JAVA_AWT_WM_NONREPARENTING

to the start script or your environment.

Credit: "Joost"

Jaldhar Vyas: Sal Mubarak 2073!

1 November, 2016 - 12:43

Wishing every one a happy Gujarati New Year, Vikram Samvat 2073 named Kilaka and hoping the upcoming year will be yuuge for you.

These next couple of paragraphs are totally not an excuse for why it will take a few more days for me to reach seven blog posts.

Reading reports about Diwali in the American press, I see a bit of confusion about whether Diwali is one day or five. Well, technically it is just one (Sunday the 30th this year) but there are a number of other observances around it which could be classed as subsidiaries if you want to look at it that way.

The season commenced last Wednesday with Rama Ekadashi. (Where the Gujarati name is different, I'll use that and put the Sanskrit name in parentheses.) That's a fast day and therefore not much fun.

Thursday was Vagh Barash (vyaghra dvadashi) which, as the name suggests, has something to do with tigers, but in my experience we don't particularly do anything special that day.

Friday, things began in earnest with Dhan Terash (dhana trayodashi) when Lakshmi the Goddess of prosperity is worshipped. It is also a good day to buy gold.

Saturday was Kali Chaudash (Kali Chaturdashi or Naraka Chaturdashi) On this day many Gujarati families including mine worship their Kuladevi (patron Goddess of the family) even if She is not an aspect of Kali. (Others observe this on the Ashtami of Navaratri.) The day is also associated with the God Hanuman. Some people say it is His Jayanti (birthday) though we observe it in Chaitra (March-April.) It is also the best day for learning mantras and I initiated a couple of people including my son into a mantra I know.

Sunday was Diwali (Deepavali) proper. As a Brahmana I spent much of the day signing blessings in the account books of shopkeepers. Well, nowadays only a few old people have actual account books, so usually they print out a spreadsheet and I sign that. But home is where the main action is. Lights are lit, fireworks are set off, and prayers are offered to Lakshmi. But most important of all, this is the day good boys and girls get presents. Unfortunately I have nothing interesting to report; just the usual utilitarian items of clothing. Fireworks, by the way, are technically illegal in New Jersey, not that that ever stopped anyone from getting them. The past few years, Jersey City has attempted to compromise by allowing a big public fireworks display. Although it was nice and sunny all day, by nighttime we had torrential rain and the fireworks display got washed out. So I'm glad I rebelled against the system with my small cache of sparklers.

Today (or yesterday, by the time this gets posted) was the Gujarati New Year's Day. There is also the commemoration of the time the God Krishna lifted up Mt Govardhan with one finger, which every mandir emulates by making an annakuta or mountain of food.

Tuesday is Bhai Beeja (Yama Dvitiya in Sanskrit or Bhai Duj in Hindi) when sisters cook a meal for their brothers. My son is also going to make something (read: microwave something) for his sister.

So those are the five days of Diwali. Though many will not consider it to be truly over until this Saturday, the lucky day of Labh Pancham (Labha panchami.) And if I still haven't managed to write seven blog posts by then, there is always Deva Diwali...

Steve McIntyre: Twenty years...

1 November, 2016 - 06:17

So, it's now been twenty years since I became a Debian Developer. I couldn't remember the exact date I signed up, but I decided to do some forensics to find out. First, I can check on the dates on my first Debian system, as I've kept it running as a Debian system ever since!

jack:~$ ls -alt /etc
...
-rw-r--r--   1 root   root     6932 Feb 10  1997 pine.conf.old
-rw-r--r--   1 root   root     6907 Dec 29  1996 pine.conf.old2
-rw-r--r--   1 root   root    76739 Dec  7  1996 mailcap.old
-rw-r--r--   1 root   root     1225 Oct 20  1996 fstab.old
jack:~$

I know that I did my first Debian installation in late October 1996, migrating over from my existing Slackware installation with the help of my friend Jon who was already a DD. That took an entire weekend and it was painful, so much so that several times that weekend I very nearly bailed and went back. But, I stuck with it and after a few more days I decided I was happier with Debian than with the broken old Slackware system I'd been using. That last file (fstab.old) is the old fstab file from the Slackware system, backed up just before I made the switch.

I was already a software developer at the time, so of course the first thing I wanted to do once I was happy with Debian was to become a DD and take over the Debian maintenance of mikmod, the module player I was working on at the time. So, I mailed Bruce to ask for an account (there was none of this NM concept back then!) and I think he replied the next day. Unfortunately, I don't have the email in my archives any more due to a disk crash back in the dim and distant past. But I can see that the first PGP key I generated for the sake of joining Debian dates from October 30th 1996 which gives me a date of 31st October 1996 for joining Debian.

Twenty years, wow... Since then, I've done lots in the project. I'm lucky enough to have been to 11 DebConfs, hosted all around the world. I'm massively proud to have been voted DPL for two of those twenty years. I've worked on a huge number of different things in Debian, from the audio applications I started with to the installer (yay, how things come back to bite you!), from low-level CD and DVD tools (and making our CD images!) to a wiki engine written in python. I've worked hard to help make the best Operating System on the planet, both for my own sake and the sake of our users.

Debian has been both excellent fun and occasionally a huge cause of stress in my life for the last 20 years, but despite the latter I wouldn't go back and change anything. Why? Through Debian, I've made some great friends: in Cambridge, in the UK, in Europe, on every continent. Thanks to you all, and here's to (hopefully) many years to come!

Jonas Meurer: debian lts report 2016.10

1 November, 2016 - 05:25
Debian LTS Report for October 2016

October 2016 was my second month as a paid Debian LTS Team member. I was allocated 12 hours and spent 10.25 of them as follows:

Links

Enrico Zini: Links for November 2016

1 November, 2016 - 05:00
So You Want To Learn Physics... [archive]
«This post is a condensed version of what I've sent to people who have contacted me over the years, outlining what everyone needs to learn in order to really understand physics.»
Operation Tamarisk [archive]
«Operation Tamarisk was a Cold War-era operation run by the military intelligence services of the U.S., U.K., and France through their military liaison missions in East Germany, that gathered discarded paper, letters, and garbage from Soviet trash bins and military maneuvers, including used toilet paper.»
Mortara case
Of how in Bologna, where I live, in the 1850s/1860s, when my grand-grand-granddad lived, the Papal State seized a child from a Jewish family «on the basis of a former servant's testimony that she had administered emergency baptism to the boy when he fell sick as an infant.»


Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.