Planet Debian

Planet Debian - https://planet.debian.org/

Lars Wirzenius: Ick ALPHA-6 released: CI/CD engine

21 June, 2018 - 23:34

It gives me no small amount of satisfaction to announce the ALPHA-6 version of ick, my fledgling continuous integration and deployment engine. Ick has now been deployed and used by people other than myself.

Ick can, right now:

  • Build system trees for containers.
  • Use system trees to run builds in containers.
  • Build Debian packages.
  • Publish Debian packages via its own APT repository.
  • Deploy to a production server.

There are still many missing features. Ick is by no means ready to replace your existing CI/CD system, but if you'd like to have a look at ick, and help us make it the CI/CD system of your dreams, now is a good time to give it a whirl.

(Big missing features: web UI, building for multiple CPU architectures, dependencies between projects, good documentation, a development community. I intend to make all of these happen in due time. Help would be welcome.)

John Goerzen: Making a difference

21 June, 2018 - 01:24

Every day, ask yourself this question: What one thing can I do today that will make this democracy stronger and honor and support its institutions? It doesn’t have to be a big thing. And it probably won’t shake the Earth. The aggregation of them will shake the Earth.

– Benjamin Wittes

I have written some over the past year or two about the dangers facing the country, and I have become increasingly alarmed about the state of things. That Benjamin Wittes quote, along with the terrible tragedy, spurred me to action. Among other things, I did two things I had never done before:

I registered to protest on June 30.

I volunteered to do phone banking with SwingLeft.

And I changed my voter registration from independent to Republican.

No, I have not gone insane. The reason for the latter is that here in Kansas, the Democrats rarely field candidates for most offices. The real action happens in the Republican primary. So if I can vote in that primary, I can have a voice in keeping the craziest of the crazy out of office. It’s not much, but it’s something.

Today we witnessed, hopefully, the first victory in our battle against the abusive practices happening to children at the southern border. Donald Trump caved, and in so doing, implicitly admitted the lies he and his administration have been telling about the situation. This only happened because enough people thought like Wittes: “I am small, but I can do SOMETHING.” When I called the three Washington offices of my senators and representatives — far-right Republicans all — it was apparent that I was by no means the first to give them an earful about this, and that they were changing their tone because of what they heard. Mind you, they hadn’t taken any ACTION yet, but the calls mattered. The reporting mattered. The attention mattered.

I am going to keep doing what little bit I can. I hope everyone else will too. Let us shake the Earth.

Julien Danjou: Stop merging your pull requests manually

20 June, 2018 - 22:53

If there's something that I hate, it's doing things manually when I know I could automate them. Am I alone in this situation? I doubt it.

Nevertheless, every day, there are thousands of developers using GitHub who do the same thing over and over again: they click on this button:

This does not make any sense.

Don't get me wrong. It makes sense to merge pull requests. It just does not make sense that someone has to push this damn button every time.

It does not make any sense because every development team in the world has a known list of prerequisites to satisfy before merging a pull request. Those requirements are almost always the same, something along these lines:

  • Is the test suite passing?
  • Is the documentation up to date?
  • Does this follow our code style guideline?
  • Have N developers reviewed this?

As this list gets longer, the merging process becomes more error-prone. "Oops, John just clicked on the merge button while not enough developers had reviewed the patch." Ring a bell?

In my team, we're like every team out there. We know what our criteria for merging code into our repository are. That's why we set up a continuous integration system that runs our test suite each time somebody creates a pull request. We also require the code to be reviewed by 2 members of the team before it's approved.

When those conditions are all set, I want the code to be merged.

Without clicking a single button.

That's exactly how Mergify started.

Mergify is a service that pushes that merge button for you. You define rules in the .mergify.yml file of your repository, and when the rules are satisfied, Mergify merges the pull request.

No need to press any button.

Take a random pull request, like this one:

This comes from a small project that does not have a lot of continuous integration services set up, just Travis. In this pull request, everything's green: one of the owners reviewed the code, and the tests are passing. Therefore, the code should already be merged: but there it is, hanging, chilling, waiting for someone to push that merge button. Someday.

With Mergify enabled, you'd just have to put this .mergify.yml at the root of the repository:

rules:
  default:
    protection:
      # The listed status checks must be green (here, the Travis CI check)
      required_status_checks:
        contexts:
          - continuous-integration/travis-ci
      # At least one approving review is required
      required_pull_request_reviews:
        required_approving_review_count: 1

With such a configuration, Mergify enforces the desired restrictions, i.e., Travis passes and at least one project member has reviewed the code. As soon as those conditions are met, the pull request is automatically merged.

We built Mergify as a free service for open-source projects. The engine powering the service is also open-source.

Now go check it out and stop letting those pull requests hang around one second more. Merge them!

If you have any questions, feel free to ask us or write a comment below! And stay tuned — Mergify offers a few other features that I can't wait to talk about!

Craig Small: Odd dependency on Google Chrome

20 June, 2018 - 18:21

For weeks I have had problems with Google Chrome. It would work a few times and then, for reasons I didn't understand, stop working. On the command line you would get several screens of text, but the Chrome window would never appear.

So I tried the Beta, and it worked… once.

Deleted all the cache and configuration and it worked… once.

Every time, on the second and subsequent starts of Chrome, the process would end up in an infinite loop listening on a Unix socket (fd 7), but no window would appear.

By sheer luck, in the screenfuls of spam, I noticed this:

Gkr-Message: 21:07:10.883: secret service operation failed: The name org.freedesktop.secrets was not provided by any .service files

Hmm. I noticed that every time I started a fresh new Chrome, I logged into my Google account. So, once again clearing everything, I started Chrome, didn't log in, and closed and reopened it. I had Chrome running the second time! Alas, without all my stuff synchronised.

An issue for Mailspring put me onto the right path: installing gnome-keyring (or the dependencies p11-kit and gnome-keyring-pkcs11) fixed Chrome.
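On Debian and derivatives that should be as simple as something like this (assuming you use apt):

 sudo apt install gnome-keyring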

So if Chrome starts but you get no window, especially if you use Cinnamon, try that trick.

Jonathan Carter: Plans for DebCamp18

20 June, 2018 - 15:32

Dates

I’m going to DebCamp18! I should arrive at NCTU around noon on Saturday, 2018-07-21.

My Agenda
  • DebConf Video: Research if/how MediaDrop can be used with existing Debian video archive backends (basically, just a bunch of files on http).
  • DebConf Video: Take a better look at PeerTube and prepare a summary/report for the video team so that we better know if/how we can use it for publishing videos.
  • Debian Live: I have a bunch of loose ideas that I’d like to formalize before then. At the very least I’d like to file a bunch of paper cut bugs for the live images that I just haven’t been getting to. Live team may also need some revitalization, and better co-ordination with packagers of the various desktop environments in terms of testing and release sign-offs. There’s a lot to figure out and this is great to do in person (might lead to a DebConf BoF as well).
  • Debian Live: Current weekly live images have Calamares installed. It's just a test, and there's no indication yet whether it will be available on the beta or final release images; we'll have to do a good assessment of all the consequences and weigh up what will work out best. I want to put together an initial report with live team members who are around.
  • AIMS Desktop: Get core AIMS meta-packages into Debian… no blockers on this, but I just haven't had enough quiet time to do it. (And thanks to AIMS for covering my travel to Hsinchu!)
  • Get some help on ITPs that have been a little bit more tricky than expected:
    • gamemode – Adjust power saving and cpu governor settings when launching games
    • notepadqq – A Linux clone of Notepad++, a popular text editor on Windows
    • Possibly finish up zram-tools which I just don’t get the time for. It aims to be a set of utilities to manage compressed RAM disks that can be used for temporary space, compressed in-memory swap, etc.
  • Debian Package of the Day series: If there’s time and interest, make some in-person videos with maintainers about their packages.
  • Get to know more Debian people, relax and socialize!

Athos Ribeiro: Triggering Debian Builds on OBS

20 June, 2018 - 09:26

This is my fifth post of my Google Summer of Code 2018 series. Links for the previous posts can be found below:

My GSoC contributions can be seen at the following links

Debian builds on OBS

OBS supports building Debian packages. To do so, one must properly configure a project so that OBS knows it is building a .deb package, and have the packages needed to handle and build Debian packages installed.

openSUSE’s OBS instance has repositories for Debian 8, Debian 9, and Debian testing.

We will use base Debian projects in our OBS instance as Download on Demand projects and use subprojects to achieve our final goal (building packages against Clang). By using the same configurations as the ones in openSUSE's public projects, we could perform builds for Debian 8 and Debian 9 in our local OBS deployments. However, builds for Debian Testing and Unstable were failing.

With further investigation, we realized that the OBS version packaged in Debian cannot decompress control.tar.xz files in .deb packages; xz has been the default compression format for the control tarball since dpkg 1.19 (it used to be control.tar.gz before that). This issue was reported on the OBS repositories and was fixed in a Pull Request that is not yet included in the current Debian OBS version. For now, we apply this patch to our OBS instance in our salt states.
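You can check the control tarball's compression for yourself, since a .deb is just an ar archive. For example (the package filename here is hypothetical):

 $ ar t hello_2.10-1_amd64.deb
 debian-binary
 control.tar.xz
 data.tar.xz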

After applying the patch, builds on Debian 8 and 9 still finish successfully, but builds against Debian Testing and Unstable get stuck in a blocked state: the dependencies are downloaded, the OBS scheduler stalls for a while, the downloaded packages get cleaned up, and then the dependencies are downloaded again. The OBS backend enters a loop doing the described procedure and never assigns a build to a worker. No logs with hints pointing to a possible issue are emitted, giving us no clue about the current problem.

Although I am inclined to believe we have a problem with our dependency list, I am still debugging this issue this week and will bring more news in my next post.

Refactoring project configuration files

Reshabh opened a Pull Request in our salt repository with the OBS configuration files for Ubuntu, also based on openSUSE's public OBS configurations. Based on Sylvestre's comments, I have been refactoring the Debian configuration files following the OBS documentation. One of the proposed improvements is to use debootstrap to mount the builder chroot. This will allow us to reduce the number of dependencies listed in the project configuration files. The issue which led to debootstrap support in OBS is available at https://github.com/openSUSE/obs-build/issues/111 and may lead to more interesting resources on the matter.

Next steps (A TODO list to keep on the radar)
  • Fix OBS builds on Debian Testing and Unstable
  • Write patches for the OBS worker issue described in post 3
  • Change the default builder to perform builds with clang
  • Trigger new builds by using the dak/mailing lists messages
  • Verify the rake-tasks.sh script idempotency and propose patch to opencollab repository
  • Separate salt recipes for workers and server (locally)
  • Properly set hostnames (locally)

Shashank Kumar: Google Summer of Code 2018 with Debian - Week 5

20 June, 2018 - 01:30

During week 5, there were 3 merge requests undergoing the review process simultaneously. I learned a lot about how code should be written so as to assist the reader, since code is read many more times than it is written.

Services and Utility

After the user has entered their information on the signin or signup screen, the job of querying the database was given to a module named updatedb. The job of updatedb was to clean user input, hash the password, query the database, and respond with an appropriate result after the database query was executed. In a discussion with Sanyam, he pointed out that updatedb doesn't conform to its name, given the functions it incorporates; he explained the virtues of Service and Utility modules/functions, and that this was the best place to restructure the code along those lines.

Utility functions can be described roughly as functions which perform some operations on data without caring much about the relationship of the data to the application. So generating a uuid, cleaning the email address, cleaning the full name, and hashing the password become our utility functions, as can be seen in utils.py for signup, and similarly for signin.

Service functions can be described roughly as functions which, while performing operations on data, take its relationship to the application into account. Hence, these functions are not generic but application-specific. sign_up_user is one such service function: it receives user information, calls utility functions to modify that information, and queries the database for the signup operation, i.e., adding the new user's details to the database or raising SignUpError if the details are already present. This can be seen in the services module for signup, and for signin as well.
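To make the split concrete, here is a minimal sketch of the idea (sign_up_user and SignUpError are the names from the post; the helper bodies and the database schema are illustrative assumptions, not the project's actual code):

import hashlib
import uuid


# utils.py: generic helpers, unaware of the application
def clean_email(email):
    # Normalize an email address
    return email.strip().lower()


def hash_password(password):
    # Hash a password (a real application should use a salted KDF)
    return hashlib.sha256(password.encode('utf-8')).hexdigest()


def generate_uuid():
    return str(uuid.uuid4())


# services.py: application-specific operations
class SignUpError(Exception):
    pass


def sign_up_user(connection, email, password):
    # Add a new user, or raise SignUpError if one already exists
    email = clean_email(email)
    cursor = connection.execute(
        'SELECT 1 FROM users WHERE email = ?', (email,))
    if cursor.fetchone():
        raise SignUpError('User already exists')
    connection.execute(
        'INSERT INTO users (id, email, password) VALUES (?, ?, ?)',
        (generate_uuid(), email, hash_password(password)))
    connection.commit()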

Persisting database connection

This is how the connection to the database used to work before the review: the settings module would create the connection to the database, create the table schema if not present, and close the connection. A few constants were kept in the module to be used by signup and signin to connect to the database. The problem is that a database connection then has to be established every time a query is executed by the services of signup or signin. Since the sqlite3 database is saved in a file alongside the application, I thought it wouldn't be a problem to make a connection whenever needed. But it puts overhead on the OS, which can slow down the application when scaled. To resolve this, settings now returns the connection object, which can be reused in any other module.
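A minimal sketch of that pattern with sqlite3 (the module layout and table schema here are illustrative assumptions, not the project's actual code):

import sqlite3

_connection = None


def get_connection(db_path='app.db'):
    # Create the connection (and schema) once; reuse it afterwards
    global _connection
    if _connection is None:
        _connection = sqlite3.connect(db_path)
        _connection.execute(
            'CREATE TABLE IF NOT EXISTS users ('
            'id TEXT PRIMARY KEY, email TEXT UNIQUE, password TEXT)')
    return _connection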

Integrating SignUp with Dashboard

While the SignUp feature was being reviewed, the Dashboard was merged, and I had to refactor the SignUp merge request accordingly. The natural flow is for SignUp to be the default screen on the UI, with the Dashboard displayed after a successful signup operation. To achieve this flow, I used a screen manager, which handles different screens and the transitions between them with predefined animations. This is defined in the main module, and the entire flow can be seen in action in the video below.
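In code, the screen switch boils down to something like this (a minimal sketch assuming Kivy's ScreenManager; the class names and screen names are illustrative, not the project's actual code):

from kivy.app import App
from kivy.uix.screenmanager import ScreenManager, Screen


class SignUpScreen(Screen):
    pass


class DashboardScreen(Screen):
    pass


class WizardApp(App):
    def build(self):
        # SignUp is the first screen added, so it is shown by default
        manager = ScreenManager()
        manager.add_widget(SignUpScreen(name='signup'))
        manager.add_widget(DashboardScreen(name='dashboard'))
        return manager

    def on_signup_success(self):
        # Switch to the Dashboard using the manager's transition animation
        self.root.current = 'dashboard'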

Designing Tutorials and Tools menu

Once the user is on the Dashboard, they have the option of picking from the different modules and going through the tutorials and tools available in each. The idea is to display a difficulty tip as well, so it becomes easier for the user to begin. Below is what I've designed to incorporate this.

Implementing Tutorials and Tools menu

Now comes the fun part: thinking about the architecture of the modules just designed, so they can take shape as code in the application. The idea is to define them in a JSON file to be read by the respective module afterwards. This way it'll be easier to add new tutorials and tools, and hence we have this resultant JSON. The development of this feature can be followed on this merge request.
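For illustration, such a JSON file might look roughly like this (the keys and values below are made up for the example, not the project's actual data):

{
  "tutorials": [
    {"title": "Encryption 101", "difficulty": "beginner"},
    {"title": "Using GPG", "difficulty": "intermediate"}
  ],
  "tools": [
    {"title": "Create a key pair", "difficulty": "beginner"}
  ]
}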

Now remains the quest to design and implement the structure of tutorials, generalized in such a way that it can be populated from a JSON file. This will give flexibility to the developers of tutorials, and a UI module can also be implemented to modify this JSON and add new tutorials without even knowing how to code. Sounds amazing, right? We'll see how it works out soon. If you have any suggestions, make sure to comment down below, on the merge request, or reach out to me.

The Conclusion

Since SignUp has also been merged, I'll now have to refactor SignIn to integrate everything into one happy application and complete the natural flow of things. Also, the design and development of tools/tutorials is underway, and by the time the next blog post is out you might be able to test the application with at least one tool or tutorial from one of the modules on the dashboard.

Benjamin Mako Hill: How markets coopted free software’s most powerful weapon (LibrePlanet 2018 Keynote)

20 June, 2018 - 01:03

Several months ago, I gave the closing keynote address at LibrePlanet 2018. The talk was about the thing that scares me most about the future of free culture, free software, and peer production.

A video of the talk is on YouTube and available as a WebM video file (both links should skip the first 3m 19s of thanks and introductions).

Here’s a summary of the talk:

App stores and the so-called “sharing economy” are two examples of business models that rely on techniques for the mass aggregation of distributed participation over the Internet and that simply didn’t exist a decade ago. In my talk, I argue that the firms pioneering these new models have learned and adapted processes from commons-based peer production projects like free software, Wikipedia, and CouchSurfing.

The result is an important shift: A decade ago,  the kind of mass collaboration that made Wikipedia, GNU/Linux, or Couchsurfing possible was the exclusive domain of people producing freely and openly in commons. Not only is this no longer true, new proprietary, firm-controlled, and money-based models are increasingly replacing, displacing, outcompeting, and potentially reducing what’s available in the commons. For example, the number of people joining Couchsurfing to host others seems to have been in decline since Airbnb began its own meteoric growth.

In the talk, I discuss how this happened and what I think it means for folks who are committed to working in the commons. I talk a little bit about what the free culture and free software communities should do now that mass collaboration, these communities' most powerful weapon, is being used against them.

I’m very much interested in feedback provided any way you want to reach me including in person, over email, in comments on my blog, on Mastodon, on Twitter, etc.

Work on the research that is reflected and described in this talk was supported by the National Science Foundation (awards IIS-1617129 and IIS-1617468). Some of the initial ideas behind this talk were developed while working on this paper (official link) which was led by Maximilian Klein and contributed to by Jinhao Zhao, Jiajun Ni, Isaac Johnson, and Haiyi Zhu.

Sean Whitton: I'm going to DebCamp18, Hsinchu, Taiwan

19 June, 2018 - 22:43

Here’s what I’m planning to work on – please get in touch if you want to get involved with any of these items.

DebCamp work

Throughout DebCamp and DebConf
  • Debian Policy: sticky bugs; process; participation; translations

  • Helping people use dgit and git-debrebase

    • Writing up or following up on feature requests and bugs

    • Design work with Ian and others

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2018

19 June, 2018 - 15:27

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, about 202 work hours were dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased to 190 hours per month thanks to a few new sponsors who joined to benefit from Wheezy’s Extended LTS support.

We are currently in a transition phase. Wheezy is no longer supported by the LTS team and the LTS team will soon take over security support of Debian 8 Jessie from Debian’s regular security team.

Thanks to our sponsors

New sponsors are in bold.


Erich Schubert: Predatory publishers: SciencePG

19 June, 2018 - 15:12

I got spammed again by SciencePG (“Science Publishing Group”).

One of many (usually Chinese or Indian) fake publishers that will publish anything as long as you pay their fees. Unfortunately, once you have published a few papers, you inevitably land on their spam lists: they scrape the websites of good journals for email addresses, and you do want your contact email address on your papers.

However, this one is particularly hilarious: They have a spelling error right at the top of their home page!

Fail.

Speaking of fake publishers. Here is another fun example:

Kim Kardashian, Satoshi Nakamoto, Tomas Pluskal
Wanion: Refinement of RPCs.
Drug Des Int Prop Int J 1(3)- 2018. DDIPIJ.MS.ID.000112.

Yes, that is a paper in the "Drug Designing & Intellectual Properties" International (Fake) Journal. And the content is a typical SciGen-generated paper that throws around random computer buzzwords and makes absolutely no sense. Not even the abstract. The references are also just made up. And so are the first two authors, VIP Kim Kardashian and missing Bitcoin inventor Satoshi Nakamoto…

In the PDF version, the first headline is “Introductiom”, with “m”…

So Lupine Publishers is another predatory publisher that does not peer review, nor check whether the article is on topic for the journal.

Via Retraction Watch

Conclusion: just because it was published somewhere does not mean this is real, or correct, or peer reviewed…

Reproducible builds folks: Reproducible Builds: Weekly report #164

19 June, 2018 - 14:40

Here’s what happened in the Reproducible Builds effort between Sunday June 10 and Saturday June 16 2018:

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week, version 96 was uploaded to Debian unstable by Chris Lamb. It includes contributions already covered by posts in previous weeks as well as new ones from:

tests.reproducible-builds.org development

There were a number of changes to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

Packages reviewed and fixed, and bugs filed

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Arthur Del Esposte: GSoC Status Update - First Month

19 June, 2018 - 10:00

In the past month I have been working on my GSoC project in Debian's Distro Tracker. This project aims at designing and implementing new features in Distro Tracker to better support Debian teams in tracking the health of their packages and prioritizing their work efforts. In this post, I will describe the current status of my contributions, highlight the main challenges, and point out the next steps.

Work Management and Communication

I communicate with Lucas Kanashiro (my mentor) constantly via IRC and in person at least once a week, as we live in the same city. We have a weekly meeting with Raphael Hertzog on the #debian-qa IRC channel to report advances, collect feedback, solve technical doubts, and plan the next steps.

I created a new repository in Salsa to keep the logs of our IRC meetings and to track my tasks through the repository's issue tracker. Besides that, once a month I'll post a new status update on my blog, such as this one, with more details regarding my contributions.

Advances

When GSoC officially started, Distro Tracker already had some team-related features. Briefly, a team is an entity composed of one or more users who are interested in the same set of packages. Teams are created manually by users, and anyone may join public teams. The team page aggregates some basic information about the team and the list of packages of interest.

Distro Tracker offers a page that enables users to browse public teams, showing a paginated, sorted list of names. It used to be hard to find a team in this list, since Distro Tracker has more than 110 teams distributed over 6 pages. So I created a new search field with auto-complete at the top of the teams page to enable users to find a team's page faster, as shown in the following figure:

Also, I have been working on improving the current teams infrastructure to enable Debian's teams to better track the health of their packages. Initially, we decided to use the data currently available in Distro Tracker to create the first version of a new team page based on PET.

Presenting team’s packages data in a table on the team’s page would be a relatively trivial task. However, Distro Tracker architecture aims to provide a generic core which can be extended through specific distro applications, such as Kali Linux. The core source code provides generic infrastructure to import data related to deb packages and also to present them in HTML pages. Therefore, we had to consider this Distro Tracker requirement to properly provide a extensible infrastructure to show packages data through tables in so that it would be easy to add new table fields and to change the default behavior of existing columns provided by the core source code.

So, based on the previously existing panels feature and on Hertzog’s suggestions, I designed and developed a framework to create customizable package tables for teams. This framework is composed of two main classes:

  • BaseTableField - A base class representing fields to be displayed on package tables. Among other things, it must define the column name and a template to render the cell content for a package (see the sketch after this list).
  • BasePackageTable - A base class representing package tables which are displayed on a team page. It may have several BaseTableFields to display package information. Different tables may show different lists of packages based on their scope.
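As an illustration, a new column could then be added with a small subclass like this (a hypothetical sketch: the import path, column name, and template are assumptions, not code from the merge request):

from distro_tracker.core.package_tables import BaseTableField


class PopconTableField(BaseTableField):
    # A hypothetical column showing popularity-contest data
    column_name = 'Popcon'
    template_name = 'core/package-table-fields/popcon.html'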

We have been discussing my implementation in an open Merge Request, and we are very close to the version that should be incorporated. The following figures show the comparison between the earlier PET table and our current implementation.

[Figures: PET packages table and Distro Tracker packages table]

Currently, the team’s page only have one table, which displays all packages related to that team. We are already presenting a very similar set of data to PET’s table. More specifically, the following columns are shown:

  • Package - displays the package name in the cell. It is implemented by the core's GeneralInformationTableField class.
  • VCS - by default, it displays the type of the package's repository (i.e. GIT, SVN) or Unknown. It is implemented by the core's VcsTableField class. The Debian app extends this behavior by adding the changelog version from the latest repository tag and displaying issues identified by Debian's VCSWatch.
  • Archive - displays the package version in the distro archive. It is implemented by the core's ArchiveTableField class.
  • Bugs - displays the total number of bugs of a package. It is implemented by the core's BugsTableField class. Ideally, each third-party app should extend this table field to add links to its own bug tracker system.
  • Upstream - displays the latest upstream version available. This is a specific table field implemented by the Debian app, since this data is imported through Debian-specific tasks. Hence, it is not available for other distros.

As the table’s cells are small to present detailed information, we have added Popper.js, a javascript library to display popovers. In this sense, some columns show a popover with more details regarding its content which is displayed on mouse hover. The following figure shows the popover to the Package column:

In addition to designing the table framework, the main challenge was to avoid the N+1 query problem, which introduces performance issues: for a set of N packages displayed in a table, each field element must perform one or more lookups for additional data for a given package. To solve this problem, each subclass of BaseTableField must define a set of Django's Prefetch objects to enable BasePackageTable objects to load all required data in batch, in advance, through prefetch_related, as listed below.

class BasePackageTable(metaclass=PluginRegistry):
    @property
    def packages_with_prefetch_related(self):
        """
        Returns the list of packages with prefetched relationships defined by
        table fields
        """
        package_query_set = self.packages
        for field in self.table_fields:
            for l in field.prefetch_related_lookups:
                package_query_set = package_query_set.prefetch_related(l)

        additional_data, implemented = vendor.call(
            'additional_prefetch_related_lookups'
        )
        if implemented and additional_data:
            for l in additional_data:
                package_query_set = package_query_set.prefetch_related(l)
        return package_query_set

    @property
    def packages(self):
        """
        Returns the list of packages shown in the table. One may define this
        based on the scope
        """
        return PackageName.objects.all().order_by('name')


class ArchiveTableField(BaseTableField):
    prefetch_related_lookups = [
        Prefetch(
            'data',
            queryset=PackageData.objects.filter(key='general'),
            to_attr='general_archive_data'
        ),
        Prefetch(
            'data',
            queryset=PackageData.objects.filter(key='versions'),
            to_attr='versions'
        )
    ]

    @cached_property
    def context(self):
        try:
            info = self.package.general_archive_data[0]
        except IndexError:
            # There is no general info for the package
            return

        general = info.value

        try:
            info = self.package.versions[0].value
            general['default_pool_url'] = info['default_pool_url']
        except IndexError:
            # There is no versions info for the package
            general['default_pool_url'] = '#'

        return general

Finally, it is worth noticing that we also improved the team’s management page by moving all team management features to a single page and improving its visual structure:

Next Steps

Now, we are moving towards adding other tables with different scopes, such as the tables presented by PET:

To this end, we will introduce the Tag model class to categorize the packages based on their characteristics. Thus, we will create an additional task responsible for tagging packages based on their available data. The relationship between packages and tags should be ManyToMany. In the end, we want to perform a simple query to define the scope of a new table, such as the following example to query all packages with Release Critical (RC) bugs:

class RCPackageTable(BasePackageTable):
    @property
    def packages(self):
        # Scope: only packages tagged with RC bugs
        tag = Tag.objects.get(name='rc-bugs')
        return tag.packages.all()

We will probably need to work on Debian's VCSWatch to enable it to receive updates through Salsa's webhooks, especially for real-time monitoring of repositories.

Let’s get moving on! \m/

Gunnar Wolf: Demoting multi-factor authentication

19 June, 2018 - 08:11

I started teaching at Facultad de Ingeniería, UNAM in January 2013. Back then, I was somewhat surprised (for good!) that the university required me to create a digital certificate for registering student grades at the end of the semester. The setup had some not-so-minor flaws (i.e. the private key was not generated on my computer but centrally, so there could be copies of it outside my control — not only could there be, I know for a fact that a copy was kept at the relevant office at my faculty, arguably to be able to help poor teachers in a timely fashion if they lost their credentials or patience), but it was decent...
Authentication was done via a Java applet, as there needed to be a verifiably(?)-secure way to ensure the certificate was properly checked at the client without transferring it over the network. Good thing!
But... Java applets have fallen out of favor. I don't think I have ever been able to register my grades from a Linux desktop (of course, I don't have a typical Linux desktop, so luck might smile on other people). But last semester and this semester I struggled even to get the grades registered from Windows — it seems every browser has deprecated the extensions for the Java runtime, and applets are no longer a thing. I mean, I could get the Oracle site to congratulate me for having Java 8 installed, but it just would not run the university's applet!
So, after losing the better part of an already-busy evening... I got a mail. It says (partial translation mine):

Subject: Problems to electronically sign at UNAM

We are from the Advance Electronic Signature at UNAM. We are sending you this mail as we have detected you have problems to sign the grades, probably due to the usage of Java.

Currently, we have a new Electronic Signature system that does not use Java, we can migrate you to this system.
(...)

The certificate will thus be stored in the cloud, we will deposit it at signing time, you just have to enter the password you will have assigned.
(...)

Of course, I answered asking which kind of "cloud" it was, as we all know that the cloud does not exist; it's just other people's computers... They decided to skip this question.

You can go see what is required for this implementation at https://www.fea.unam.mx/. "Prueba de la firma" (Test your signature) asks me for my CURP (a publicly known number that identifies every Mexican resident). Then, it asks me for a password. And that's it. Yay :-Þ

Anyway, I accepted, as losing so much time to grading is just too much. And... yes, many people will be happy. Partly, I'm relieved by this (I have managed to hate Java for over 20 years). I am just saddened by the fact that we have lost an almost-decent-enough electronic signature implementation and fallen back to just a user-password scheme. There are many ways to do crypto verification on the client side nowadays; I know JavaScript is sandboxed and cannot escape to touch my filesystem, but... It is amazing we are losing this simple and proven use case.

And it's amazing they are pulling it off as if it were a good thing.

Benjamin Mako Hill: Honey Buckets

19 June, 2018 - 05:40

When I was growing up in Washington state, a company called Honey Bucket held a dominant position in the local portable toilet market. Their toilets are still a common sight in the American West.

Honey Bucket brand portable toilet. Photo by donielle. (CC BY-SA)

They were so widespread when I was a child that I didn’t know that “Honey Bucket” was the name of a company at all until I moved to Massachusetts for college. I thought “honey bucket” was just the generic term for toilets that could be moved from place-to-place!

So for the first five years that I lived in Massachusetts, I continued to call all portable toilets “honey buckets.” Until somebody asked me why I called them “honey buckets”—five years after moving!—all my friends in Massachusetts thought that “honey bucket” was just a personal, idiosyncratic, and somewhat gross, euphemism.

Russell Coker: Cooperative Learning

18 June, 2018 - 19:28

This post is about my latest idea for learning about computers. I posted it to my local LUG mailing list and received no responses. But I still think it’s a great idea and that I just need to find the right way to launch it.

I think it would be good to try cooperative learning about Computer Science online. The idea is that everyone would join an IRC channel at a suitable time with virtual machine software configured, try out new FOSS software at the same time, and exchange ideas about it via IRC. It would be fairly informal and people could come and go as they wish; the session would probably go for about 4 hours, but if people want to go on longer then no-one would stop them.

I’ve got some under-utilised KVM servers that I could use to provide test VMs for network software, my original idea was to use those for members of my local LUG. But that doesn’t scale well. If a larger group people are to be involved they would have to run their own virtual machines, use physical hardware, or use trial accounts from VM companies.

The general idea would be for two broad categories of sessions, ones where an expert provides a training session (assigning tasks to students and providing suggestions when they get stuck) and ones where the coordinator has no particular expertise and everyone just learns together (like “let’s all download a random BSD Unix and see how it compares to Linux”).

As this would be IRC-based, there would be no impediment to people from other regions being involved, apart from the fact that it might start at 1AM their time (i.e. 6PM on the east coast of Australia is 1AM on the west coast of the US). For most people the best times for such education would be evenings on weeknights, which greatly limits the geographic spread.

While the aims of this would mostly be things that relate to Linux, I would be happy to coordinate a session on ReactOS as well. I’m thinking of running training sessions on etbemon, DNS, Postfix, BTRFS, ZFS, and SE Linux.

I’m thinking of coordinating learning sessions about DragonflyBSD (particularly HAMMER2), ReactOS, Haiku, and Ceph. If people are interested in DragonflyBSD then we should do that one first as in a week or so I’ll probably have learned what I want to learn and moved on (but not become enough of an expert to run a training session).

One of the benefits of this idea is to help in motivation. If you are on your own playing with something new like a different Unix OS in a VM you will be tempted to take a break and watch YouTube or something when you get stuck. If there are a dozen other people also working on it then you will have help in solving problems and an incentive to keep at it while help is available.

So the issues to be discussed are:

  1. What communication method to use? IRC? What server?
  2. What time/date for the first session?
  3. What topic for the first session? DragonflyBSD?
  4. How do we announce recurring meetings? A mailing list?
  5. What else should we setup to facilitate training? A wiki for notes?

Finally, while I list things I'm interested in learning and teaching, this isn't just about me. If this becomes successful then I expect that there will be some topics that don't interest me and some sessions at times when I have other things to do (like work). I'm sure people can have fun without me. If anyone has already established something like this then I'd be happy to join that instead of starting my own; my aim is not to run another hobbyist/professional group but to learn things and teach things.

There is a Wikipedia page about Cooperative Learning. While it's interesting, I don't think it has much relevance to what I'm trying to do. The Wikipedia article has some good information on the benefits of cooperative education and situations where it doesn't work well. My idea is to have self-selecting people who choose it because of their own personal goals in terms of fun and learning. So it doesn't have to work for everyone, just for enough people to form a good group.


John Goerzen: Memories, Father’s Day, and an 89-year-old plane

18 June, 2018 - 14:59

“Oh! I have slipped the surly bonds of Earth
And danced the skies on laughter-silvered wings;
Sunward I’ve climbed, and joined the tumbling mirth
of sun-split clouds, — and done a hundred things”

– John Gillespie Magee, Jr.

I clicked on the radio transmitter in my plane.

O’Neill Traffic, Bonanza xx departing to the south. And Trimotor, thanks for flight #1. We really enjoyed it.

And we had. Off to my left, a 1929 Ford Trimotor airliner was heading off into the distance, looking as if it were just hanging in the air, glinting in the morning sun, 1000 feet above the ground. Earlier that morning, my boys and I had been passengers in that very plane. But now we had taken off right after them, as they were taking another load of passengers up for a flight and we were flying back home. To my right was my 8-year-old, and my 11-year-old was in back, both watching out the windows. The radio clicked on, and the three of us heard the other pilot’s response:

Oh thank you. We’re glad you came!

A few seconds later, they were gone out of sight.

The experience of flying in an 89-year-old airliner is quite something. As with the time we rode on the Durango & Silverton railroad, it felt like stepping back into a time machine — into the early heyday of aviation.

Jacob and Oliver had been excited about this day for a long time. We had tried to get a ride when it was on tour in Oklahoma, much closer, but one of them got sick on the drive that day and it didn't work out. So Saturday morning, we took the 1.5-hour flight up to northern Nebraska. We'd heard they'd have a pancake breakfast fundraiser, and the boys were even more excited. They asked to set the alarm early, so we'd have no risk of missing out on airport pancakes.

Jacob took this photo of the sunrise at the airport while I was doing my preflight checks:

Here’s one of the beautiful views we got as we flew north to meet the Trimotor.

It was quite something to share a ramp with that historic machine. Here’s a photo of our plane not far from the Trimotor.

After we got there, we checked in for the flight, had a great pancake and sausage breakfast, and then boarded the Trimotor. The engines fired up with a most satisfying low rumble, and soon we were aloft — cruising along at 1000 feet in that (by modern standards) noisy, slow, and beautiful machine. We explored the Nebraska countryside from the air before returning 20 minutes later. I asked the boys what they thought.

“AWESOME!” was the reply. And I agreed.

Jacob and Oliver have long enjoyed pretending to be flight attendants when we fly somewhere. They want me to make airline-sounding announcements, so I’ll say something like, “This is your captain speaking. In a few moments, we’ll begin our descent into O’Neill. Flight attendants, prepare the cabin for arrival.” Then Jacob will say, “Please return your tray tables that you don’t have to their full upright and locked position, make sure your seat belt is tightly fastened, and your luggage is stowed. This is your last chance to visit the lavatory that we don’t have. We’ll be on the ground shortly.”

A while back, I loaded up some zip-lock bags with peanuts and found some particularly small bottles of pop. Since then, it's become tradition on our longer flights for them to hand out bags of peanuts and small quantities of pop as we cruise along — "just like the airlines." A little while back, I finally put a small fridge in the hangar so they get to choose a cold beverage right before we leave. (We don't typically have such things around, so it's a special treat.)

Last week, as I was thinking about Father’s Day, I told them how I remembered visiting my dad at work, and how he’d let me get a bottle of Squirt from the pop machine there (now somewhat rare). So when we were at the airport on Saturday, it brought me a smile to hear, “DAD! This pop machine has Squirt! Can we get a can? It’s only 75 cents!” “Sure – after our Trimotor flight.” “Great! Oh, thank you dad!”

I realized then I was passing a small but special memory on to another generation. I’ve written before of my childhood memories of my dad, and wondering what my children will remember of me. Martha isn’t old enough yet to remember her cackles of delight as we play peek-a-boo or the books we read at bedtime. Maybe Jacob and Oliver will remember our flights, or playing with mud, or researching dusty maps in a library, playing with radios, or any of the other things we do. Maybe all three of them will remember the cans of Squirt I’m about to stock that hangar fridge with.

But if they remember that I love them and enjoy doing things with them, they will have remembered the most important thing. And that is another special thing I got from my parents, and can pass on to another generation.

Steve Kemp: Monkeying around with interpreters - Result

18 June, 2018 - 10:15

So I challenged myself to write a BASIC interpreter over the weekend; unfortunately, I did not succeed.

What I did instead was take an existing monkey-repl and extend it with a series of changes, to make sure that I understood all the various parts of the interpreter's design.

Initially I was just making basic changes:

  • Added support for single-line comments.
    • For example "// This is a comment".
  • Added support for multi-line comments.
    • For example "/* This is a multi-line comment */".
  • Expand \n and \t in strings.
  • Allow the index operation to be applied to strings.
    • For example "Steve Kemp"[0] would result in S.
  • Added a type function.
    • For example "type(3.13)" would return "float".
    • For example "type(3)" would return "integer".
    • For example "type("Moi")" would return "string".

Once I did that I overhauled the built-in functions, allowing callers to register golang functions to make them available to their monkey-scripts. Using this I wrote a simple "standard library" with some simple math, string, and file I/O functions.

The end result was that I could read files, line-by-line, or even just return an array of the lines in a file:

 // "wc -l /etc/passwd" - sorta
 let lines = file.lines( "/etc/passwd" );
 if ( lines ) {
    puts( "Read ", len(lines), " lines\n" )
 }

Adding file I/O was pretty neat, although I only implemented reading. Looping over a file's contents is a little verbose:

 // wc -c /etc/passwd, sorta.
 let handle = file.open("/etc/passwd");
 if ( handle < 0 ) {
   puts( "Failed to open file" )
 }

 let c = 0;       // count of characters
 let run = true;  // still reading?

 for( run == true ) {

    let r = read(handle);
    let l = len(r);
    if ( l > 0 ) {
        let c = c + l;
    }
    else {
        let run = false;
    }
 };

 puts( "Read " , c, " characters from file.\n" );
 file.close(handle);

This morning I added some code to interpolate hash-values into a string:

 // Hash we'll interpolate from
 let data = { "Name":"Steve", "Contact":"+358449...", "Age": 41 };

 // Expand the string using that hash
 let out = string.interpolate( "My name is ${Name}, I am ${Age}", data );

 // Show it worked
 puts(out + "\n");

Finally I added some type conversions, allowing strings/floats to be converted to integers, and allowing other values to be converted to strings. With the addition of a math.random function we then got:

 // math.random() returns a float between 0 and 1.
 let rand = math.random();

 // modify to make it from 1-10 & show it
 let val = int( rand * 10 ) + 1 ;
 puts( "math.random() -> ", val , "\n");

The only other significant change was the addition of a new form of function definition. Rather than defining functions like this:

 let hello = fn() { puts( "Hello, world\n" ) };

I updated things so that you could also define a function like this:

 function hello() { puts( "Hello, world\n" ) };

(The old form still works, but this is "clearer" in my eyes.)

Maybe next weekend I'll try some more BASIC work, though for the moment I think my monkeying around is done. The world doesn't need another scripting language, and as I mentioned there are a bunch of implementations of this around.

The new structure I made makes adding a real set of standard-libraries simple, and you could embed the project, but I'm struggling to think of why you would want to. (Though I guess you could pretend you're embedding something more stable than anko and not everybody loves javascript as a golang extension language.)


Clint Adams: Before the combination with all the asterisks

18 June, 2018 - 04:49

We assembled at the rally point on the wrong side of the tracks. When consensus was achieved, we began our march to the Candy Kingdom. Before we had made it even a single kilometer, a man began yelling at us.

"It's not here," he exclaimed. "It's that way."

This seemed incredible. It became apparent that, despite his fedora, he was probably the King of Ooo.

Nevertheless, we followed him in the direction he indicated. He did not offer us space in his vehicle, but we managed to catch up eventually.

"It's to the right of the cafe. Look for сиська," he announced.

It occurred to me that the only sign I had seen that said сиська was right by where he had intercepted us. It also occurred to me that the cafe had three sides, and “right” was rather ambiguous.

There was much confusion until the Banana Man showed up.



Creative Commons License: the copyright of each article belongs to its respective author. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.