Planet Debian

Planet Debian - https://planet.debian.org/

Arturo Borrero González: Netfilter workshop 2019 Malaga summary

11 July, 2019 - 19:00

This week we had the annual Netfilter Workshop. This time the venue was in Malaga (Spain). We had the hotel right in downtown Malaga and the meeting room was at the ETSII, University of Malaga. We had plenty of talks, sessions, discussions and debates, and I will try to summarize in this post what it was all about.

Florian Westphal, Linux kernel hacker, Netfilter coreteam member and engineer at Red Hat, started with a talk about some work being done in the core of the Netfilter code in the kernel to convert packet processing to lists. He shared an overview of current problems and challenges. Processing a list rather than individual packets seems to have several benefits: the code can be smarter and faster, so this looks like a good improvement. On the other hand, Florian thinks some of the pain of refactoring all the code may not be worth it. Other approaches may be considered to introduce even more fast-forwarding paths (apart from the flow table mechanism, for example, which is already available).

Florian also followed up with the next topic: testing. We are starting to have a lot of duplicated code for testing. Pablo's suggestion is to introduce some dedicated tools to ease maintenance and the testing itself. Special mention goes to nfqueue and tproxy, two mechanisms that require quite a bit of code to be well tested (and can be hard to set up anyway).

Ahmed Abdelsalam, engineer at Cisco, gave a talk on SRv6 network programming. This protocol simplifies some interesting use cases from the network engineering point of view. For example, SRv6 aims to eliminate some tunneling and overlay protocols (VXLAN and friends) and to increase native multi-tenancy support in IPv6 networks. Network service chaining is one of the main use cases, which is really interesting in cloud environments. He mentioned that some Kubernetes CNI mechanisms are going to implement SRv6 soon. The protocol is interesting not only for cloud use cases but also from the general network engineering point of view. By the way, Ahmed shared some really interesting numbers and graphs regarding global IPv6 adoption. Ahmed also presented the work that has been done in Linux in general and in nftables in particular to support such setups. I had the opportunity to talk more personally with Ahmed during the workshop to learn more about this mechanism and the different use cases and applications it has.

Fernando, a GSoC student, gave us an overview of the OSF functionality of nftables, which identifies different operating systems from inside the ruleset. He shared some of the improvements he has been working on, and some of them are great, like version matching and wildcards.

Brett, engineer from Untangle, shared some plans to use a new nftables expression (dict) to arbitrarily match on metadata. The proposed design is interesting because it covers some use cases from new perspectives. This triggered a debate on different design approaches and solutions to the issues presented.

The next day, Pablo Neira, head of the Netfilter project, started by opening a debate about extra features for nftables, like the ones provided via xtables-addons for iptables. The first one we evaluated was GeoIP. I suggested having some generic infrastructure to be able to query external metadata from nftables, given that we have more and more use cases looking for this (OSF, the dict expression, GeoIP). Other exotic extensions were discussed, like TARPIT, DELUDE, DHCPMAC, DNETMAP, ECHO, Fuzzy, gradm, iface, ipp2p, etc.

A talk on connection tracking support for the Linux bridge followed, led by Pablo. A status update on the latest work was shared, and some debate happened regarding different use cases for ebtables & nftables.

The next topic was a complex one with no easy solutions: hosting of the Netfilter project infrastructure: git repositories, mailing lists, web pages, wiki deployments, bugzilla, etc. Right now the project has a couple of physical servers housed in a datacenter in Seville, but nobody has time to properly maintain them, upgrade them, and so on. Also, part of our infra is getting old, for example the webpage. Some other stuff is outdated, like the project's Twitter accounts. Nobody actually has time to keep things updated, and this is probably the root problem. Many options were considered, including moving to GitHub, GitLab, or other hosting providers.

After lunch, Pablo followed up with a status update on hardware flow offload capabilities for nftables. He started with an overview of the current status of ethtool_rx and tc offloads, their capabilities and limitations. It should be possible for most commodity hardware to support some variable amount of offload capabilities, but apparently the code was not in very good shape. The new flow block API should improve this situation, while also adding support for nftables offload. Related article on LWN: https://lwn.net/Articles/793080/

The next talk was by Phil, engineer at Red Hat. He commented on user-defined strings in nftables, which present some challenges. Some debate happened, mostly to reach an agreement on how to proceed.

The next day, Phil was the one to continue with the workshop talks, this time sharing his TODO list for iptables-nft and presenting and discussing planned work. This triggered a discussion on how to handle certain bugs in Debian Buster which have a lot of patch dependencies (so we cannot simply cherry-pick a single patch for stable). It turns out I maintain most of the Debian Netfilter packages, and Sebastian Delafond, who is also a Debian Developer, was attending the workshop too. We provided some Debian-side input on how to better proceed with fixes for specific bugs in Debian. Phil continued by pointing out several improvements that we need in nftables in order to support some rather exotic use cases in both iptables-nft and ebtables-nft.

Yi-Hung Wei, engineer working on Open vSwitch, shared some interesting features related to using the conntrack engine in certain scenarios. OVS is really useful in cloud environments. Specifically, the open discussion was around zone-based timeout policy support for other Netfilter use cases. Pablo pointed out that nftables already supports this. By the way, the Wikimedia Cloud Services team plans to use it in the near future by means of Neutron.

Phil gave another talk about nftables undefined-behaviour situations. He has been working lately on polishing the last gaps between the -legacy and -nft flavors of iptables and friends. Mostly what we have yet to solve are some corner cases, plus a weird ICMP situation. Thanks to Phil for taking care of this.

Stephen, engineer at secunet, followed up after lunch to bring up a couple of topics about improvements to the kernel datapath using XDP. He also commented on partitioning the system into control-plane and data-plane CPUs. The nftables flow table infrastructure is doing exactly this, as pointed out by Florian.

Florian continued with some open-for-discussion topics on pending features for nftables. It looks like every day we have more requests for different setups and use cases with nftables, so we need to define the use cases as well as possible, and also try to avoid reinventing the wheel for some stuff.

Laura, engineer at Zevenet, followed up with a really interesting talk on load balancing and clustering using nftables. The amount of features and improvements added to nftlb since last year is amazing: stateless DNAT topologies, L7 helper support, more topologies for virtual services and backends, improvements to affinities, security policies, different clustering architectures, etc. We had an interesting conversation about how we integrate with etcd at the Wikimedia Foundation for sharing information between load balancers and for pooling/depooling backends. They are also spearheading a proposal to include support for nftables in Kubernetes kube-proxy.

Abdessamad El Abbassi, also an engineer at Zevenet, shared the project this company is developing to create an nft-based L7 proxy capable of offloading. They showed some metrics in which this new L7 proxy outperforms HAProxy for some setups. Quite interesting. Some debate also happened around SSL termination and how to better handle that situation.

That very afternoon the core team of the Netfilter project had a meeting in which some internal topics were discussed.

I really enjoyed this round of the Netfilter Workshop, and very much enjoyed the time with all the folks, old friends and new ones.

Wouter Verhelst: DebConf Video player

11 July, 2019 - 17:14

Last weekend, I sat down to learn a bit more about Angular, a TypeScript-based programming environment for rich client webapps. According to their website, "TypeScript is a typed superset of JavaScript that compiles to plain JavaScript", which makes the programming environment slightly easier to work with. Additionally, since TypeScript compiles to whatever subset of JavaScript you want to target, it compiles to something that should work on almost every browser (and if it doesn't, in most cases the fix is just to tweak the compatibility settings a bit).

Since I think learning about a new environment is best done by actually writing a project that uses it, and since I thought it was something we could really use, I wrote a video player for the DebConf video team. It makes use of the metadata archive that Stefano Rivera has been working on for the last few years (or so). It's not quite ready yet (notably, I need to add routing so you can deep-link to a particular video), but I think it's gotten to a state where it is useful for more general consumption.

We'll see where this gets us...

Steve Kemp: Building a computer - part 1

11 July, 2019 - 17:01

I've been tinkering with hardware for a couple of years now. Most of this is trivial stuff if I'm honest, for example:

  • Wiring a display to a WiFi-enabled ESP8266 device.
    • Making it fetch data over the internet and display it.
  • Hooking up a temperature/humidity sensor to a device.
    • Submit readings to an MQ bus.

Off-hand I think the most complex projects I've built have been complex in terms of software. For example I recently hooked up a 933MHz radio receiver to an ESP8266 device, then had to reverse-engineer the protocol of the device I wanted to listen for. I recorded a radio burst using an SDR dongle on my laptop, broke the transmission into 1s and 0s manually, worked out the payload, and then ported that code to the ESP8266 device.

Anyway, I've decided I should do something more complex: I should build "a computer". Going old-school, I'm going to stick to what I know best, the Z80 microprocessor. I started programming as a child with a ZX Spectrum, which is built around a Z80.

Initially I started with BASIC; later I moved on to assembly language, mostly because I wanted to hack games for infinite lives. I suspect the reason I don't play video games so often these days is that I'm just not very good without cheating ;)

Anyway, the Z80 is a reasonably simple processor, available in a 40-pin DIP package. There are the obvious connections for power, ground, and a clock source to make the thing tick. After that there are pins for the address bus, and pins for the data bus. Wiring up a standalone Z80 seems to be pretty trivial.

Of course making the processor "go" doesn't really give you much. You can wire it up, turn on the power, and barring explosions what do you have? A processor executing NOP instructions with no way to prove it is alive.

So to make a computer I need to interface with it. There are two obvious things that are required:

  • The ability to get your code on the thing.
    • i.e. It needs to read from memory.
  • The ability to read/write externally.
    • i.e. Light an LED, or scan for keyboard input.

I'm going to keep things basic at the moment, no pun intended. Because I have no RAM, no ROM, and no keyboard, I'm going to .. fake it.

The Z80 has 40 pins, of which I reckon we need to cable up over half. Only the Arduino Mega has enough pins for that, so I think if I use a Mega I can wire it to the Z80 and then use the Arduino to drive it:

  • That means the Arduino will generate a clock-signal to make the Z80 tick.
  • The Arduino will monitor the address-bus
    • When the Z80 makes a request to read the RAM at address 0x0000 it will return something from its memory.
    • When the Z80 makes a request to write to the RAM at address 0xffff it will store it away in its memory.
  • Similarly I can monitor for requests for I/O and fake that.

In short, the Arduino will run a sketch with a 1024-byte array, which the Z80 will believe is its memory. Via the serial console I can read/write that RAM, or have the contents hardcoded.
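To make that idea concrete, here is a minimal, untested Arduino-style sketch of the memory-servicing loop. The pin assignments, helper functions and overall structure are my own illustrative assumptions, not Steve's design, and real Z80 bus details (M1 cycles, /WAIT, refresh, I/O requests) are glossed over:

// Illustrative sketch only: an Arduino Mega pretending to be 1KB of RAM
// for a Z80. Pin numbers and bus handling are assumptions for the example.
const int CLK_PIN  = 2;   // drives the Z80 clock input
const int MREQ_PIN = 3;   // Z80 /MREQ (active low)
const int RD_PIN   = 4;   // Z80 /RD   (active low)
const int WR_PIN   = 5;   // Z80 /WR   (active low)
const int ADDR0    = 22;  // A0..A15 assumed on pins 22..37
const int DATA0    = 42;  // D0..D7  assumed on pins 42..49

byte fakeRam[1024];       // the "memory" the Z80 will believe it has

uint16_t readAddressBus() {
  uint16_t addr = 0;
  for (int i = 0; i < 16; i++)
    addr |= (uint16_t)digitalRead(ADDR0 + i) << i;
  return addr % sizeof(fakeRam);     // wrap everything into our 1KB array
}

byte readDataBus() {
  byte value = 0;
  for (int i = 0; i < 8; i++)
    value |= (byte)digitalRead(DATA0 + i) << i;
  return value;
}

void driveDataBus(byte value) {
  for (int i = 0; i < 8; i++) {
    pinMode(DATA0 + i, OUTPUT);
    digitalWrite(DATA0 + i, (value >> i) & 1);
  }
}

void releaseDataBus() {
  for (int i = 0; i < 8; i++)
    pinMode(DATA0 + i, INPUT);
}

void setup() {
  pinMode(CLK_PIN, OUTPUT);
  pinMode(MREQ_PIN, INPUT);
  pinMode(RD_PIN, INPUT);
  pinMode(WR_PIN, INPUT);
  for (int i = 0; i < 16; i++) pinMode(ADDR0 + i, INPUT);
  releaseDataBus();
}

void loop() {
  // One (very slow) clock tick per pass through loop().
  digitalWrite(CLK_PIN, HIGH);
  digitalWrite(CLK_PIN, LOW);

  if (digitalRead(MREQ_PIN) == LOW) {      // the Z80 wants memory
    uint16_t addr = readAddressBus();
    if (digitalRead(RD_PIN) == LOW) {
      driveDataBus(fakeRam[addr]);         // read: answer from the array
    } else if (digitalRead(WR_PIN) == LOW) {
      fakeRam[addr] = readDataBus();       // write: store into the array
    }
  } else {
    releaseDataBus();                      // otherwise stay off the data bus
  }
}

The serial-console poking of fakeRam is left out of the sketch, but it would slot straight into loop() or setup().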

I thought I was being creative with this approach, but it seems like it has been done before, numerous times. For example:

  • http://baltazarstudios.com/arduino-zilog-z80/
  • https://forum.arduino.cc/index.php?topic=60739.0
  • https://retrocomputing.stackexchange.com/questions/2070/wiring-a-zilog-z80

Anyway, I've ordered a bunch of Z80 chips and an Arduino Mega (since I own only one Arduino, having moved on to ESP8266 devices pretty quickly), so once they arrive I'll document the process further.

Once it works I'll need to slowly remove the Arduino stuff - I guess I'll start by trying to build an external RAM/ROM interface, or an external I/O circuit. But basically:

  • Hook the Z80 up to the Arduino such that I can run my own code.
  • Then replace the Arduino over time with standalone stuff.

The end result? I have no illusions that I can connect a full-sized keyboard to the chip and drive a TV. But I bet I could wire up four buttons and an LCD panel. That should be enough to program a game of Tetris in Z80 assembly and declare success. Something like that anyway :)

Expect part two to appear after my order of parts arrives from China.

Sven Hoexter: Frankenstein JVM with flavour - jlink your own JVM with OpenJDK 11

10 July, 2019 - 21:31

While you can find a lot of information regarding the Java "Project Jigsaw", I could not really find a good example of "assembling" your own JVM. So I took a few minutes to figure that out. My use case here is that someone would like to use Instana (a non-free tracing solution), which requires the java.instrument and jdk.attach modules to be available. From an operations perspective we do not want to ship the whole JDK in our production Docker images, so we have to ship a modified JVM. Currently we base our images on the builds provided by AdoptOpenJDK.net, so my examples are based on those builds. You can just download and untar them to any directory to follow along.

You can check the available modules of your JVM by running:

$ jdk-11.0.3+7-jre/bin/java --list-modules | grep -E '(instrument|attach)'
java.instrument@11.0.3

As you can see, only the java.instrument module is available. So let's assemble a custom JVM which includes all the modules provided by the default AdoptOpenJDK.net JRE builds plus the missing jdk.attach module:

$ jdk-11.0.3+7/bin/jlink --module-path jdk-11.0.3+7 --add-modules $(jdk-11.0.3+7-jre/bin/java --list-modules|cut -d'@' -f 1|tr '\n' ',')jdk.attach --output myjvm

$ ./myjvm/bin/java --list-modules | grep -E '(instrument|attach)'
java.instrument@11.0.3
jdk.attach@11.0.3

Size-wise the increase is, as expected, rather minimal:

$ du -hs myjvm jdk-11.0.3+7-jre jdk-11.0.3+7
141M    myjvm
121M    jdk-11.0.3+7-jre
310M    jdk-11.0.3+7

For the fun of it you could also add the compiler so you can execute source files directly:

$ jdk-11.0.3+7/bin/jlink --module-path jdk-11.0.3+7 --add-modules $(jdk-11.0.3+7-jre/bin/java --list-modules|cut -d'@' -f 1|tr '\n' ',')jdk.compiler --output myjvm2
$ ./myjvm2/bin/java HelloWorld.java
Hello World!
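For completeness, the HelloWorld.java used above is just the obvious single-file program (any trivial source file would do):

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}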

Jonathan Dowland: Bose on-ear wireless headphones

10 July, 2019 - 16:27

Azoychka modelling the headphones

Earlier this year, and after about five years, I've had to accept that my beloved AKG K451 foldable headphones have finally died, despite the best efforts of a friendly colleague in the Newcastle Red Hat office, who had replaced and re-soldered all the wiring through the headband, and disassembled the left ear-cup to remove a stray metal ring that had got jammed in the jack, most likely snapped from one of the several headphone cables I'd gone through.

The K451s were really good phones. They didn't sound quite as good as my much larger, much less portable Sennheisers, but they came close, and their portability gave them fantastic utility. They remained comfortable to wear and listen to for hours on end, and were surprisingly low-leaking. I became convinced that on-ear was a good form factor for portable headphones.

To replace them, I decided to finally give wireless headphones a try. There are not a lot of on-ear, smaller form-factor wireless headphone models. I really wanted to like the Sony WH-H800s, which (I thought) looked stylish, and reviews of their bigger brother (the 1000-series over-ear) are fantastic. The 800s proved very hard to audition. I could only find one vendor in Newcastle with a pair for evaluation, Currys PC World, but the circumstances were very poor: a noisy store, the headphones tethered to a security frame on a very short leash so that I had to stoop to put them on, and no ability to try my own music through the headset. The headset itself seemed poorly constructed, with ill-fitting hard plastic that rattled when I picked it up.

I therefore ended up buying the Bose on-ear wireless headphones. I was able to audition them in several different environments, using my own music, both over Bluetooth and via a cable. They are very comfortable, which is important for the use-case. I was a little nervous about reports on Bose sound quality, which is described as more sculpted than true to the source material, but I was happy with what I could hear in my demonstrations. What clinched it was a few other circumstances (that I won't elaborate on here) which brought the price down to comparable to what I paid for the AKG K451s.

A few months in, the only criticism I have of the Bose headphones is that I can get some mild discomfort on my helix if I have positioned them poorly. This has not turned out to be a big problem. One consequence of having wireless headphones, aside from increased convenience in the same listening circumstances where I used wired headphones, is all the situations in which I can now use them that I wouldn't have bothered with before, including a far wider range of housework chores, going up and down ladders, DIY jobs, etc. I'm finding myself consuming a lot more podcasts and programmes from BBC Radio, and experimenting more with streaming music.

Eddy Petrișor: Rust: How do we teach "Implementing traits in no_std for generics using lifetimes" without students going mad?

10 July, 2019 - 06:57
I'm trying to go through Sergio Benitez's CS140E class and I am currently at Implementing StackVec. StackVec is something that currently looks like this:

/// A contiguous array type backed by a slice.
///
/// `StackVec`'s functionality is similar to that of `std::Vec`. You can `push`
/// and `pop` and iterate over the vector. Unlike `Vec`, however, `StackVec`
/// requires no memory allocation as it is backed by a user-supplied slice. As a
/// result, `StackVec`'s capacity is _bounded_ by the user-supplied slice. This
/// results in `push` being fallible: if `push` is called when the vector is
/// full, an `Err` is returned.
#[derive(Debug)]
pub struct StackVec<'a, T: 'a> {
    storage: &'a mut [T],
    len: usize,
    capacity: usize,
}

The initial skeleton did not contain the derive Debug and the capacity field, I added them myself.

Now I am trying to understand what needs to happen behind:
  1. IntoIterator
  2. when in no_std
  3. with a custom type which has generics
  4. and has to use lifetimes
I don't know what I'm doing, but I might have managed to do it:

pub struct StackVecIntoIterator<'a, T: 'a> {
    stackvec: StackVec<'a, T>,
    index: usize,
}

impl<'a, T: Clone + 'a> IntoIterator for StackVec<'a, &'a mut T> {
    type Item = &'a mut T;
    type IntoIter = StackVecIntoIterator<'a, T>;

    fn into_iter(self) -> Self::IntoIter {
        StackVecIntoIterator {
            stackvec: self,
            index: 0,
        }
    }
}

impl<'a, T: Clone + 'a> Iterator for StackVecIntoIterator<'a, T> {
    type Item = &'a mut T;

    fn next(&mut self) -> Option<Self::Item> {
        let result = self.stackvec.pop();
        self.index += 1;

        result
    }
}

I was really struggling to understand what the returned iterator type should be in my case, since std::vec is obviously out because (a) I am trying to do a no_std implementation of (b) something that should look a little like std::vec.

That was until I found this wonderful example of a custom type without using any already-implemented Iterator, instead defining the helper PixelIntoIterator struct and its associated impl block:

struct Pixel {
    r: i8,
    g: i8,
    b: i8,
}

impl IntoIterator for Pixel {
    type Item = i8;
    type IntoIter = PixelIntoIterator;

    fn into_iter(self) -> Self::IntoIter {
        PixelIntoIterator {
            pixel: self,
            index: 0,
        }

    }
}

struct PixelIntoIterator {
    pixel: Pixel,
    index: usize,
}

impl Iterator for PixelIntoIterator {
    type Item = i8;
    fn next(&mut self) -> Option<Self::Item> {
        let result = match self.index {
            0 => self.pixel.r,
            1 => self.pixel.g,
            2 => self.pixel.b,
            _ => return None,
        };
        self.index += 1;
        Some(result)
    }
}


fn main() {
    let p = Pixel {
        r: 54,
        g: 23,
        b: 74,
    };
    for component in p {
        println!("{}", component);
    }
}

The part in bold was what I was actually missing. Once I had that missing link, I was able to struggle through the generics part.

Note that, once I had only one new thing left, the generics - luckily the lifetime part seemed to simply be considered part of the generic thing - everything was easier to navigate.


Still, the fact that there are so many new things at once, one of them being lifetimes - which cannot be taught, only experienced (@oli_obk) - makes things very confusing.

Even if I think I managed it for IntoIterator, I am similarly confused about implementing "Deref for StackVec" for the same reasons.

I think I am seeing on my own skin what Oliver Scherer was saying about how big infodumps at the beginning are not the way to go. I feel that if Sergio's class were now in its second year, things would have improved. OTOH, I am now very curious what your curriculum looks like, Oli?

All that aside, what should the signature of the impl be? Is this OK?

impl<'a, T: Clone + 'a> Deref for StackVec<'a, &'a mut T> {
    type Target = T;

    fn deref(&self) -> &Self::Target;
}

Trivial examples like wrapper structs over basic Copy types such as u8 make it more obvious what Target should be, but in this case it's unclear, at least to me, at this point. And because of that I am unsure what the implementation should even look like.
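For what it's worth, here is a minimal sketch of one plausible shape, purely as an assumption on my side rather than the class's intended answer: implement Deref for the plain StackVec<'a, T> from the top of the post and make Target the slice of elements pushed so far, rather than a single T.

use core::ops::Deref;

// Sketch only: relies on the StackVec<'a, T> definition shown at the top
// of this post, where `storage` is the backing slice and `len` is the
// number of elements pushed so far.
impl<'a, T: 'a> Deref for StackVec<'a, T> {
    // Deref to a slice, so the usual slice methods become available.
    type Target = [T];

    fn deref(&self) -> &Self::Target {
        &self.storage[..self.len]
    }
}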

I don't know what I'm doing, but I hope things will become clear with more exercise.

Matthew Garrett: Bug bounties and NDAs are an option, not the standard

10 July, 2019 - 04:15
Zoom had a vulnerability that allowed users on MacOS to be connected to a video conference with their webcam active simply by visiting an appropriately crafted page. Zoom's response has largely been to argue that:

a) There's a setting you can toggle to disable the webcam being on by default, so this isn't a big deal,
b) When Safari added a security feature requiring that users explicitly agree to launch Zoom, this created a poor user experience and so they were justified in working around this (and so introducing the vulnerability), and,
c) The submitter asked whether Zoom would pay them for disclosing the bug, and when Zoom said they'd only do so if the submitter signed an NDA, they declined.

(a) and (b) are clearly ludicrous arguments, but (c) is the interesting one. Zoom go on to mention that they disagreed with the severity of the issue, and in the end decided not to change how their software worked. If the submitter had agreed to the terms of the NDA, then Zoom's decision that this was a low severity issue would have led to them being given a small amount of money and never being allowed to talk about the vulnerability. Since Zoom apparently have no intention of fixing it, we'd presumably never have heard about it. Users would have been less informed, and the world would have been a less secure place.

The point of bug bounties is to provide people with an additional incentive to disclose security issues to companies. But what incentive are they offering? Well, that depends on who you are. For many people, the amount of money offered by bug bounty programs is meaningful, and agreeing to sign an NDA is worth it. For others, the ability to publicly talk about the issue is worth more than whatever the bounty may award - being able to give a presentation on the vulnerability at a high profile conference may be enough to get you a significantly better paying job. Others may be unwilling to sign an NDA on principle, refusing to trust that the company will ever disclose the issue or fix the vulnerability. And finally there are people who can't sign such an NDA - they may have discovered the issue on work time, and employer policies may prohibit them doing so.

Zoom are correct that it's not unusual for bug bounty programs to require NDAs. But when they talk about this being an industry standard, they come awfully close to suggesting that the submitter did something unusual or unreasonable in rejecting their bounty terms. When someone lets you know about a vulnerability, they're giving you an opportunity to have the issue fixed before the public knows about it. They've done something they didn't need to do - they could have just publicly disclosed it immediately, causing significant damage to your reputation and potentially putting your customers at risk. They could potentially have sold the information to a third party. But they didn't - they came to you first. If you want to offer them money in order to encourage them (and others) to do the same in future, then that's great. If you want to tie strings to that money, that's a choice you can make - but there's no reason for them to agree to those strings, and if they choose not to then you don't get to complain about that afterwards. And if they make it clear at the time of submission that they intend to publicly disclose the issue after 90 days, then they're acting in accordance with widely accepted norms. If you're not able to fix an issue within 90 days, that's very much your problem.

If your bug bounty requires people sign an NDA, you should think about why. If it's so you can control disclosure and delay things beyond 90 days (and potentially never disclose at all), look at whether the amount of money you're offering for that is anywhere near commensurate with the value the submitter could otherwise gain from the information and compare that to the reputational damage you'll take from people deciding that it's not worth it and just disclosing unilaterally. And, seriously, never ask for an NDA before you're committing to a specific $ amount - it's never reasonable to ask that someone sign away their rights without knowing exactly what they're getting in return.

tl;dr - a bug bounty should only be one component of your vulnerability reporting process. You need to be prepared for people to decline any restrictions you wish to place on them, and you need to be prepared for them to disclose on the date they initially proposed. If they give you 90 days, that's entirely within industry norms. Remember that a bargain is being struck here - you offering money isn't being generous, it's you attempting to provide an incentive for people to help you improve your security. If you're asking people to give up more than you're offering in return, don't be surprised if they say no.


Sean Whitton: Upload to Debian with just 'git tag' and 'git push'

10 July, 2019 - 03:49

At a sprint over the weekend, Ian Jackson and I designed and implemented a system to make it possible for Debian Developers to upload new versions of packages by simply pushing a specially formatted git tag to salsa (Debian’s GitLab instance). That’s right: the only thing you will have to do to cause new source and binary packages to flow out to the mirror network is sign and push a git tag.

It works like this:

  1. DD signs and pushes a git tag containing some metadata. The tag is placed on the commit you want to release (which is probably the commit where you ran dch -r).

  2. This triggers a GitLab webhook, which passes the public clone URI of your salsa project and the name of the newly pushed tag to a cloud service called tag2upload.

  3. tag2upload verifies the signature on the tag against the Debian keyring[1], produces a .dsc and .changes, signs these, and uploads the result to ftp-master.[2]

    (tag2upload does not have, nor need, push access to anyone’s repos on salsa. It doesn’t make commits to the maintainer’s branch.)

  4. ftp-master and the autobuilder network push out the source and binary packages in the usual way.

The first step of this should be as easy as possible, so we’ve produced a new script, git debpush, which just wraps git tag and git push to sign and push the specially formatted git tag.

We’ve fully implemented tag2upload, though it’s not running in the cloud yet. However, you can try out this workflow today by running tag2upload on your laptop, as if in response to a webhook. We did this ourselves for a few real uploads to sid during the sprint.

  1. First get the tools installed. tag2upload reuses code from dgit and dgit-infrastructure, and lives in bin:dgit-infrastructure. git debpush is in a completely independent binary package which does not make any use of dgit.[3]

    % apt-get install git-debpush dgit-infrastructure dgit debian-keyring

    (You need version 9.1 of the first three of these packages, which is only in Debian unstable at the time of writing. However, the .debs will install on stretch & buster, and I'll put them in buster-backports once that opens.)

  2. Prepare a source-only upload of some package that you normally push to salsa. When you are ready to upload this, just type git debpush.

    If the package is non-native, you will need to pass a quilt option to inform tag2upload what git branch layout you are using—it has to know this in order to produce a .dsc. See the git-debpush(1) manpage for the supported quilt options.

    The quilt option you select gets stored in the newly created tag, so for your next upload you won’t need it, and git debpush alone will be enough.

    See the git-debpush(1) manpage for more options, but we’ve tried hard to ensure most users won’t need any.

  3. Now you need to simulate salsa’s sending of a webhook to the tag2upload service. This is how you can do that:

    % mkdir -p ~/tmp/t2u
    % cd ~/tmp/t2u
    % DGIT_DRS_EMAIL_NOREPLY=myself@example.org dgit-repos-server \
        debian . /usr/share/keyrings/debian-keyring.gpg,a --tag2upload \
        https://salsa.debian.org/dgit-team/dgit-test-dummy.git debian/1.23
    

    … substituting your own service admin e-mail address, salsa repo URI and new tag name.

    Check the file ~/tmp/t2u/overall.log to see what happened, and perhaps take a quick look at Debian’s upload queue.

A few other notes about trying this out:

  • tag2upload will delete various files and directories in your working directory, so be sure to invoke it in an empty directory like ~/tmp/t2u.

  • You won’t see any console output, and the command might feel a bit slow. Neither of these will matter when tag2upload is running as a cloud service, of course. If there is an error, you’ll get an e-mail.

  • Running the script like this on your laptop will use your default PGP key to sign the .dsc and .changes. The real cloud service will have its own PGP key.

  • The shell invocation given above is complicated, but once the cloud service is deployed, no human is going to ever need to type it!

    What’s important to note is the two pieces of user input the command takes: your salsa repo URI, and the new tag name. The GitLab web hook will provide the tag2upload service with (only) these two parameters.

For some more discussion of this new workflow, see the git-debpush(1) manpage. We hope you have fun trying it out.

  1. Unfortunately, DMs can’t try tag2upload out on their laptops, though they will certainly be able to use the final cloud service version of tag2upload.
  2. Only source-only uploads are supported, but this is by design.
  3. Do not be fooled by the string ‘dgit’ appearing in the generated tags! We are just reusing a tag metadata convention that dgit also uses.

Thomas Lange: Talks, articles and a podcast in German

9 July, 2019 - 21:31

In April I gave a talk about FAI at the GUUG-Frühjahrsfachgespräch in Karlsruhe (Slides). At this nice meeting Ingo from RadioTux did an interview with me (https://www.radiotux.de/index.php?/archives/8050-RadioTux-Sendung-April-2019.html) (from 0:35:30).

Then I found an article in the iX Special 2019 magazine about automation in the data center which mentioned FAI. Nice. But I was very surprised and happy when I saw a whole article about FAI in Linux Magazin 7/2019. A very good article with some focus on networking, but the class system and installing other distributions are also described. And they will also publish another article about the FAI.me service in a few months. I'm excited!

In a few days, I'm going to DebConf19 in Curitiba for two weeks. I will work on Debian web stuff, check my other packages (rinse, dracut, tcsh) and hope to meet a lot of friendly people.

And in August I'm giving a talk at FrOSCon about FAI.


Steve Kemp: Upgraded my first host to buster

9 July, 2019 - 16:01

I upgraded the first of my personal machines to Debian's new stable release, buster, yesterday. So far there are two minor niggles, but nothing major.

My hosts are managed, sometimes, by puppet. The puppet master is running stretch and has puppet 4.8.2 installed. After upgrading my test host to the new stable release I discovered it has puppet 5.5 installed:

root@git ~ # puppet --version
5.5.10

I was not sure whether there would be compatibility problems, but after reading the release notes nothing jumped out. Things seemed to work, once I fixed this immediate problem:

     # puppet agent --test
     Warning: Unable to fetch my node definition, but the agent run will continue:
     Warning: SSL_connect returned=1 errno=0 state=error: dh key too small
     Info: Retrieving pluginfacts
     ..

This error-message was repeated multiple times:

SSL_connect returned=1 errno=0 state=error: dh key too small

To fix this, comment out the line in /etc/ssl/openssl.cnf which reads:

CipherString = DEFAULT@SECLEVEL=2

The second problem was that I use borg to run backups, once per day on most systems, and twice per day on others. I have an invocation which looks like this:

borg create ${flags} --compression=zlib  --stats ${dest}${self}::$(date +%Y-%m-%d-%H:%M:%S) \
   --exclude=/proc \
   --exclude=/swap.file \
   --exclude=/sys  \
   --exclude=/run  \
   --exclude=/dev  \
   --exclude=/var/log \
   /

That started to fail:

borg: error: unrecognized arguments: /

I fixed this by re-ordering the arguments so that the invocation ends with the destination followed by the path, and by changing --exclude=x to --exclude x:

borg create ${flags} --compression=zlib  --stats \
   --exclude /proc \
   --exclude /swap.file \
   --exclude /sys  \
   --exclude /run  \
   --exclude /dev  \
   --exclude /var/log \
   ${dest}${self}::$(date +%Y-%m-%d-%H:%M:%S)  /

That approach works on both my old and new hosts.

I'll leave this single system updated for a few more days to see what else is broken, if anything. Then I'll upgrade them in turn.

Good job!

Daniel Kahn Gillmor: DANE OPENPGPKEY for debian.org

9 July, 2019 - 11:00

I recently announced the publication of Web Key Directory for @debian.org e-mail addresses. This blog post announces another way to fetch OpenPGP certificates for @debian.org e-mail addresses, this time using only the DNS. These two mechanisms are complementary, not in competition. We want to make sure that whatever certificate lookup scheme your OpenPGP client supports, you will be able to find the appropriate certificate.

The additional mechanism we're now supporting (since a few days ago) is DANE OPENPGPKEY, specified in RFC 7929.

How does it work?

DANE OPENPGPKEY works by storing a minimized OpenPGP certificate in the DNS, in a subdomain, at a label based on a hashed version of the local part of the e-mail address.
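To see the raw record behind that scheme, you can query the DNS directly. This is just an illustration of the RFC 7929 naming convention as I understand it (the local part is hashed with SHA2-256 and the hex-encoded digest truncated to 28 octets); GnuPG does all of this for you, and an older dig may need TYPE61 in place of the OPENPGPKEY mnemonic:

$ hash=$(printf '%s' dkg | sha256sum | cut -c1-56)
$ dig +short OPENPGPKEY ${hash}._openpgpkey.debian.org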

With modern GnuPG, if you're interested in retrieving the OpenPGP certificate for dkg as served by the DNS, you can do:

gpg --auto-key-locate clear,nodefault,dane --locate-keys dkg@debian.org

If you're interested in how this DNS zone is populated, take a look at the code that handles it. Please request improvements if you see ways that this could be improved.

Unfortunately, GnuPG does not currently do DNSSEC validation on these records, so the cryptographic protections offered by this client are not as strong as those provided by WKD (which at least checks the X.509 certificate for a given domain name against the list of trusted root CAs).

Why offer both DANE OPENPGPKEY and WKD?

I'm hoping that the Debian project can ensure that, whatever sensible mechanism any OpenPGP client implements for certificate lookup, it will be able to find the appropriate OpenPGP certificate for contacting someone within the @debian.org domain.

A clever OpenPGP client might even consider these two mechanisms -- DANE OPENPGPKEY and WKD -- as corroborative mechanisms, since an attacker who happens to compromise one of them may find it more difficult to compromise both simultaneously.

How to update?

If you are a Debian developer and you want your OpenPGP certificate updated in the DNS, please follow the normal procedures for Debian keyring maintenance like you always have. When a new debian-keyring package is released, we will update these DNS records at the same time.

Thanks

Setting this up would not have been possible without help from weasel on the Debian System Administration team, and Noodles from the keyring-maint team providing guidance.

DANE OPENPGPKEY was documented and shepherded through the IETF by Paul Wouters.

Thanks to all of these people for making it possible.

Rodrigo Siqueira: Status Update, June 2019

9 July, 2019 - 10:00

For a long time, I have been cultivating the desire to get into the habit of writing a monthly status update; in some way, Drew DeVault's blog posts and Martin Peres' advice pushed me in this direction. So, here I am! I have decided to embrace the challenge of composing a report per month. I hope this new habit helps me to improve my writing, summarizing, and communication skills, but most importantly, helps me to keep track of my work. I want to start this update by describing my work conditions and then focus on the technical stuff.

For the last two months, I have had to deal with an infrastructure problem in order to work. I am facing obstacles such as restricted Internet access and long hours on public transportation from my home to my workplace. Unfortunately, I cannot work from my house due to the lack of space, and the best place to work is a public library at the University of Brasilia (UnB); going to UnB every day makes me waste around 3 hours per day on a bus. The library has a great environment, but it also has thousands of Internet restrictions; for example, I cannot access websites with the '.me' domain and I cannot connect to my IRC bouncer. In summary: it has been hard to work these days. So, let's stop talking about non-technical stuff and get to the heart of the matter.

I really like working on VKMS, and I know this isn't news to anyone; in June most of my efforts were dedicated to VKMS. One of my paramount endeavours was finding and fixing a bug in vkms that makes kms_cursor_crc and kms_pipe_crc_basic fail; I had been chasing this bug for a long time, as can be seen here [1]. After many hours of debugging I sent a patch to handle this issue [2]; however, after Daniel's review, I realized that my patch did not correctly fix the problem. Daniel decided to dig into this issue to find the root of the problem and later sent a final fix; if you want to see the solution, take a look at [3]. One day, I want to write a post about this fix since it is an interesting subject to discuss.

Daniel also noticed some concurrency problems in the CRC code and sent a patchset composed of 10 patches that tackle the issue. These patches focus on better framebuffer manipulation and on avoiding race conditions; it took me around 4 days to review and test this series. During my review, I asked many things related to concurrency and other clarifications about DRM, and Daniel always replied with a very nice and detailed explanation. If you want to learn a little bit more about locks, I recommend you take a look at [4]; seriously, it is really nice!

I also worked on adding writeback support to vkms; since XDC2018 I have not been able to stop thinking about the idea of adding a writeback connector to vkms because of the benefits it could bring, such as new tests and assisting developers with visual output. As a result, I made some clumsy attempts to implement it in January, but I really dove into this issue in the middle of April, and in June I was focused on making it work. It was tough for me to implement these features for the following reasons:

  1. There isn't an i-g-t test for writeback in the main repository; I had to use a WIP patchset made by Brian and Liviu.
  2. I was not familiar with framebuffers, connectors, and fancy manipulation.

As a result of the above limitations, I had to invest many hours reading the documentation and the DRM/IGT code. In the end, I think that adding writeback connectors paid off well for me since I feel much more comfortable with many things related to DRM these days. The writeback support has not landed yet; at this moment the patch is under review (v3) and has changed a lot since the first version. For details about this series take a look at [5]. I'll write a post about this feature after it gets merged.

After getting the writeback connectors working in vkms, I felt so grateful to Brian, Liviu, and Daniel for all the assistance they provided me. In particular, I was thrilled that Brian and Liviu had made the kms_writeback test, which worked as an implementation guideline for me; as a result, I updated their patchset to make it work with the latest version of IGT and made some tiny fixes. My goal was to help them upstream kms_writeback, so I resubmitted the series with the hope of seeing it land in IGT [9].

In parallel with my work on writeback, I was trying to figure out how I could expose vkms configurations to userspace via configfs. After many efforts, I submitted the first version of configfs support; in this patchset I exposed the virtual and writeback connectors. Take a look at [6] for more information about this feature, and I'll definitely write a post about it after it lands.

Finally, I'm still trying to get a patch landed that makes drm_wait_vblank_ioctl return EOPNOTSUPP instead of EINVAL if the driver does not support vblank. Since this change is in the DRM core and also affects userspace, it is not easy to get this patch landed. For the details, you can take a look here [7]. I also implemented some changes in kms_flip to validate the changes that I made to drm_wait_vblank_ioctl, and those got landed [8].

July Aims

In June I was totally dedicated to vkms; now I want to slow down a little bit and study more about userspace. I want to take a step back and write some tiny programs using libdrm with the goal of understanding the interaction between userspace and kernel space. I also want to take a look at the theoretical side of computer graphics.

I want to put some effort into improving a tool named kw that helps me during my work on the Linux kernel. I also want to take a look at real overlay plane support in vkms. I have noticed that I have to find a "contribution protocol" (review/write code) that works for me in my current work conditions; otherwise, work will become painful for my relatives and me. Finally, and most importantly, I want to take some days off to enjoy my family.

Info: If you find any problem with this text, please let me know. I will be glad to fix it.

References

[1] “First discussion in the Shayenne’s patch about the CRC problem”. URL: https://lkml.org/lkml/2019/3/10/197

[2] “Patch fix for the CRC issue”. URL: https://patchwork.freedesktop.org/patch/308617/

[3] “Daniel final fix for CRC”. URL: https://patchwork.freedesktop.org/patch/308881/?series=61703&rev=1

[4] “Rework crc worker”. URL: https://patchwork.freedesktop.org/series/61737/

[5] “Introduces writeback support”. URL: https://patchwork.freedesktop.org/series/61738/

[6] “Introduce basic support for configfs”. URL: https://patchwork.freedesktop.org/series/63010/

[7] “Change EINVAL by EOPNOTSUPP when vblank is not supported”. URL: https://patchwork.freedesktop.org/patch/314399/?series=50697&rev=7

[8] “Skip VBlank tests in modules without VBlank”. URL: https://gitlab.freedesktop.org/drm/igt-gpu-tools/commit/2d244aed69165753f3adbbd6468db073dc1acf9a

[9] “Add support for testing writeback connectors”. URL: https://patchwork.freedesktop.org/series/39229/

Jonathan Wiltshire: Too close?

9 July, 2019 - 03:39

At times of stress I’m prone to topical nightmares, but they are usually fairly mundane – last night, for example, I dreamed that I’d mixed up bullseye and bookworm in one of the announcements of future code names.

But Saturday night was a whole different game. Imagine taking a rucksack out of the cupboard under the stairs, and thinking it a bit too heavy for an empty bag. You open the top and it’s full of small packages tied up with brown paper and string. As you take each one out and set it aside you realise, with mounting horror, that these are all packages missing from buster and which should have been in the release. But it’s too late to do anything about that now; you know the press release went out already because you signed it off yourself, so you can’t do anything else but get all the packages out of the bag and see how many were forgotten. And you dig, and count, and dig, and it’s like Mary Poppins’ carpet bag, and still they keep on coming…

Sometimes I wonder if I’ve got too close.

Andy Simpkins: Buster Release Party – Cambridge, UK

8 July, 2019 - 21:21

With the release of Debian GNU/Linux 10.0.0 "Buster" completing in the small hours of yesterday morning (0200hrs UTC or thereabouts), most of the 'release parties' had already been and gone…. Not so for the Cambridge contingent, who had scheduled a get-together for the Sunday [0], knowing that various attendees would be working on the release until the end.

The Sunday afternoon saw a gathering in the Haymakers pub to celebrate a successful release. We would like to publicly thank the Raspberry Pi Foundation [1] and Mythic Beasts [2], who between them picked up the tab for our bar bill – cheers and thank you!

 

 

 

 

Ian and Sean also managed to join us, taking time out from the dgit sprint that had been running since Friday [3].

For most of the afternoon we had the inside of the pub all to ourselves.   

This was a friendly and low-key event as we decompressed from the previous day's activities, but we were also joined by many other DDs, users and supporters, mostly hanging out in the garden – though I confess to staying most of the time indoors, in the shade and with a breeze through the pub…

Laptops are still needed even at a party

The start of the next project…

 

[0]    http://www.chiark.greenend.org.uk/pipermail/debian-uk/2019-June/000481.html
       https://wiki.debian.org/ReleasePartyBuster/UK/Cambridge

[1]   https://www.raspberrypi.org/

[2]    https://www.mythic-beasts.com/

[3]    https://wiki.debian.org/Sprints/2019/DgitDebrebase

Matthew Garrett: Creating hardware where no hardware exists

8 July, 2019 - 02:46
The laptop industry was still in its infancy back in 1990, but it still faced a core problem that we do today - power and thermal management are hard, but also critical to a good user experience (and potentially to the lifespan of the hardware). These were the days when DOS and Windows had no memory protection, so handling these problems at the OS level would have been an invitation for someone to overwrite your management code and potentially kill your laptop. The safe option was pushing all of this out to an external management controller of some sort, but vendors in the 90s were the same as vendors now and would do basically anything to avoid having to drop an extra chip on the board. Thankfully(?), Intel had a solution.

The 386SL was released in October 1990 as a low-powered mobile-optimised version of the 386. Critically, it included a feature that let vendors ensure that their power management code could run without OS interference. A small window of RAM was hidden behind the VGA memory[1] and the CPU configured so that various events would cause the CPU to stop executing the OS and jump to this protected region. It could then do whatever power or thermal management tasks were necessary and return control to the OS, which would be none the wiser. Intel called this System Management Mode, and we've never really recovered.

Step forward to the late 90s. USB is now a thing, but even the operating systems that support USB usually don't in their installers (and plenty of operating systems still didn't have USB drivers). The industry needed a transition path, and System Management Mode was there for them. By configuring the chipset to generate a System Management Interrupt (or SMI) whenever the OS tried to access the PS/2 keyboard controller, the CPU could then trap into some SMM code that knew how to talk to USB, figure out what was going on with the USB keyboard, fake up the results and pass them back to the OS. As far as the OS was concerned, it was talking to a normal keyboard controller - but in reality, the "hardware" it was talking to was entirely implemented in software on the CPU.

Since then we've seen even more stuff get crammed into SMM, which is annoying because in general it's much harder for an OS to do interesting things with hardware if the CPU occasionally stops in order to run invisible code to touch hardware resources you were planning on using, and that's even ignoring the fact that operating systems in general don't really appreciate the entire world stopping and then restarting some time later without any notification. So, overall, SMM is a pain for OS vendors.

Change of topic. When Apple moved to x86 CPUs in the mid 2000s, they faced a problem. Their hardware was basically now just a PC, and that meant people were going to try to run their OS on random PC hardware. For various reasons this was unappealing, and so Apple took advantage of the one significant difference between their platforms and generic PCs. x86 Macs have a component called the System Management Controller that (ironically) seems to do a bunch of the stuff that the 386SL was designed to do on the CPU. It runs the fans, it reports hardware information, it controls the keyboard backlight, it does all kinds of things. So Apple embedded a string in the SMC, and the OS tries to read it on boot. If it fails, so does boot[2]. Qemu has a driver that emulates enough of the SMC that you can provide that string on the command line and boot OS X in qemu, something that's documented further here.

What does this have to do with SMM? It turns out that you can configure x86 chipsets to trap into SMM on arbitrary IO port ranges, and older Macs had SMCs in IO port space[3]. After some fighting with Intel documentation[4] I had Coreboot's SMI handler responding to writes to an arbitrary IO port range. With some more fighting I was able to fake up responses to reads as well. And then I took qemu's SMC emulation driver and merged it into Coreboot's SMM code. Now, accesses to the IO port range that the SMC occupies on real hardware generate SMIs, trap into SMM on the CPU, run the emulation code, handle writes, fake up responses to reads and return control to the OS. From the OS's perspective, this is entirely invisible[5]. We've created hardware where none existed.

The tree where I'm working on this is here, and I'll see if it's possible to clean this up in a reasonable way to get it merged into mainline Coreboot. Note that this only handles the SMC - actually booting OS X involves a lot more, but that's something for another time.

[1] If the OS attempts to access this range, the chipset directs it to the video card instead of to actual RAM.
[2] It's actually more complicated than that - see here for more.
[3] IO port space is a weird x86 feature where there's an entire separate IO bus that isn't part of the memory map and which requires different instructions to access. It's low performance but also extremely simple, so hardware that has no performance requirements is often implemented using it.
[4] Some current Intel hardware has two sets of registers defined for setting up which IO ports should trap into SMM. I can't find anything that documents what the relationship between them is, but if you program the obvious ones nothing happens and if you program the ones that are hidden in the section about LPC decoding ranges things suddenly start working.
[5] Eh technically a sufficiently enthusiastic OS could notice that the time it took for the access to occur didn't match what it should on real hardware, or could look at the CPU's count of the number of SMIs that have occurred and correlate that with accesses, but good enough


Debian GSoC Kotlin project blog: Week 4 & 5 Update

7 July, 2019 - 18:51
Finished downgrading the project to be buildable by Gradle 4.4.1

I have finished downgrading the project to be buildable using Gradle 4.4.1. The project still needed a part of Gradle 4.8, which I have successfully patched into the sid Gradle. Here is the link to the changes that I have made.

Now we are officially done with making the project build with our Gradle, so we can now go ahead and finally start mapping out and packaging the dependencies.

Packaging dependencies for Kotlin-1.3.30

I split this task into two sub-tasks that can be done independently. The two sub-tasks are as follows:

  • Part 1: make the entire project build successfully without :buildSrc:prepare-deps:intellij-sdk:build.
    • Part 1.1: package these dependencies.
  • Part 2: package the dependencies in :buildSrc:prepare-deps:intellij-sdk:build, i.e. try to recreate whatever is in it.

The task has been split into this exact model because this folder has a variety of jars that the project uses, and we'll have to minimize it and package the needed jars from it. Also, the project uses other plugins and jars besides this one main folder, which we can simultaneously map out and package.

I now have successfully mapped out the dependencies needed by part 1; all that remains is to package them. I have copied the dependencies from the original cache (the one created when I build the project using ./gradlew -Pteamcity=true dist) to /usr/share/maven-repo, so some of these dependencies still need to have their dependencies clearly defined, as in which of their dependencies we can omit and which we need. I have marked such dependencies with a "". So here are the dependencies:
  • jengeleman:shadow:4.0.3 --> https://github.com/johnrengelman/shadow
  • trove4j 1.x --> https://github.com/JetBrains/intellij-deps-trove4j
  • proguard:6.0.3 in jdk8
  • io.javaslang:2.0.6
  • jline 3.0.3
  • com.jcabi:jcabi-aether:1.0
  • org.sonatype.aether:aether-api:1.13.1

!!NOTE: Please note that I might have missed some; I'll add them to the list once I get them mapped out properly!!

So if any of you kind souls want to help me out, please take on any of these and package them.

!!NOTE: ping me if you want to build Kotlin on your system and are stuck!!

Here is a link to the work I have done so far. You can find me as m36 or m36[m] on #debian-mobile and #debian-java on OFTC.

I'll try to maintain this blog and post the major updates weekly.

Eriberto Mota: Debian: repository changed its ‘Suite’ value from ‘testing’ to ‘stable’

7 July, 2019 - 10:44

Debian 10 (Buster) was released two hours ago. \o/

When using ‘apt-get update’, I can see the message:

E: Repository 'http://security.debian.org/debian-security buster/updates InRelease' changed its 'Suite' value from 'testing' to 'stable'
N: This must be accepted explicitly before updates for this repository can be applied. See apt-secure(8) manpage for details.

 

Solution:

# apt-get --allow-releaseinfo-change update

 

Enjoy!

Bits from Debian: Debian 10 "buster" has been released!

7 July, 2019 - 08:25

You've always dreamt of a faithful pet? He is here, and his name is Buster! We're happy to announce the release of Debian 10, codenamed buster.

Want to install it? Choose your favourite installation media and read the installation manual. You can also use an official cloud image directly on your cloud provider, or try Debian prior to installing it using our "live" images.

Already a happy Debian user and you only want to upgrade? You can easily upgrade from your current Debian 9 "stretch" installation; please read the release notes.

Do you want to celebrate the release? We provide some buster artwork that you can share or use as base for your own creations. Follow the conversation about buster in social media via the #ReleasingDebianBuster and #Debian10Buster hashtags or join an in-person or online Release Party!

Andy Simpkins: Debian Buster Release

7 July, 2019 - 07:30

I spent all day smoke-testing the install images for yesterday's (well, this morning's – just after midnight local time, so we still had 11 hours to spare) Debian GNU/Linux 10.0.0 "Buster" release.

This year we had our "Biglyest test matrix ever" [0]: 111 tests were completed at the point the release images were signed and the ftp team pushed the button. Although more tests were reported to have taken place on IRC, we had a total of 9 people completing tests called for in the wiki test matrix.

We also had a large number of people on IRC during the image-testing phase of the release – peaking at 117…

Steve kindly hosted a few of us in his front room – having a local mirror [1] of the images available to our VM tests as an NFS share really speeds things up! Between the 4 of us here in Cambridge we were testing on a total of 14 different machines: mainly AMD64, a couple of "only i386 capable" laptops, and 3 entirely different ARM64 machines (Mustang, Synquacer, & MacchiatoBin).

Room heaters on a hot day – photo RandomBird

In the photo are Sledge, Codehelp, Isy and myself…[2]

A special thanks should also go to Schweer, who spent time testing the Debian Edu images, and to Zdykstra for testing on ppc64el [2].

 

Finally, Sledge posted a link to the bandwidth utilization on the distribution network servers; last week it looked like this:

Debian primary distribution network last week

That shows a daily peak of 500 MBytes/s in traffic.

After pushing Buster today, the graph looks like this:

Guess when the Buster release went live?

1.5 GBytes/second – that is some network load :-)

 

Anyway – time to head home. I have a release party to attend later this afternoon and would like *some* sleep first.

 

 

[0] https://wiki.debian.org/Teams/DebianCD/ReleaseTesting/Buster_r0

[1] The local mirror is another ARM Synquacer 'server' – 24-core Cortex-A53 (the A53 series is targeted at mobile devices)

[2] IRC nicks have been used to protect the identity of the guilty :-)

François Marier: SIP Encryption on VoIP.ms

7 July, 2019 - 06:00

My VoIP provider recently added support for TLS/SRTP-based call encryption. Here's what I did to enable this feature on my Asterisk server.

First of all, I changed the registration line in /etc/asterisk/sip.conf to use the "tls" scheme:

[general]
register => tls://mydid:mypassword@servername.voip.ms

then I enabled incoming TCP connections:

tcpenable=yes

and TLS:

tlsenable=yes
tlscapath=/etc/ssl/certs/

Finally, I changed my provider entry in the same file to:

[voipms]
type=friend
host=servername.voip.ms
secret=mypassword
username=mydid
context=from-voipms
allow=ulaw
allow=g729
insecure=port,invite
transport=tls
encryption=yes

(Note the last two lines.)

The dialplan didn't change and so I still have the following in /etc/asterisk/extensions.conf:

[pstn-voipms]
exten => _1NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _1NXXNXXXXXX,n,Dial(SIP/voipms/${EXTEN})
exten => _1NXXNXXXXXX,n,Hangup()
exten => _NXXNXXXXXX,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _NXXNXXXXXX,n,Dial(SIP/voipms/1${EXTEN})
exten => _NXXNXXXXXX,n,Hangup()
exten => _011X.,1,Set(CALLERID(all)=Francois Marier <5551234567>)
exten => _011X.,n,Authenticate(1234) ; require password for international calls
exten => _011X.,n,Dial(SIP/voipms/${EXTEN})
exten => _011X.,n,Hangup(16)

Server certificate

The only thing I still need to fix is to make this error message go away in my logs:

asterisk[8691]: ERROR[8691]: tcptls.c:966 in __ssl_setup: TLS/SSL error loading cert file. <asterisk.pem>

It appears to be related to the fact that I didn't set tlscertfile in /etc/asterisk/sip.conf and that it's using its default value of asterisk.pem, a non-existent file.

Since my Asterisk server is only acting as a TLS client, and not a TLS server, there's probably no harm in not having a certificate. That said, it looks pretty easy to use a Let's Encrypt cert with Asterisk.
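For anyone who does want to give it a certificate without going as far as Let's Encrypt, a throwaway self-signed one should be enough to silence the warning. This is only a sketch (the paths are arbitrary, and I'm assuming chan_sip wants the key and certificate concatenated into the single PEM file that tlscertfile points at):

# mkdir -p /etc/asterisk/keys
# openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=$(hostname -f)" \
    -keyout /etc/asterisk/keys/asterisk.key -out /etc/asterisk/keys/asterisk.crt
# cat /etc/asterisk/keys/asterisk.key /etc/asterisk/keys/asterisk.crt > /etc/asterisk/keys/asterisk.pem

and then in /etc/asterisk/sip.conf:

tlscertfile=/etc/asterisk/keys/asterisk.pem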


Creative Commons License — Copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.