Since the last release of ghc-heap-view, which was compatible with GHC-7.6, I got 8 requests for a GHC-7.8 compatible version. I started working on it in January, but got stuck and then kept putting it off.
Today, I got the ninth request, and I did not want to wait for the tenth, so I finally finished the work and you can use the new ghc-heap-view-0.5.2 with GHC-7.8.
I used this chance to migrate its source repository from Darcs to git (mirrored on GitHub), so maybe this means that when 7.10 comes out, the requests to update it will come with working patches :-). I also added a small test script so that Travis can check it:
I did not test it very thoroughly yet. In particular, I did not test whether ghc-vis works as expected.
I still think that the low-level interface that ghc-heap-view creates using custom Cmm code should move into GHC itself, so that it does not break so easily, but I have not yet gotten around to proposing a patch for that.
It’s been a while since someone actually touched the underlying authentication infrastructure that powers the GNOME machines. The very first setup was originally configured by Jonathan Blandford (jrb), who set up an OpenLDAP instance with several customized schemas (pServer fields in the old CVS days, pubAuthorizedKeys and GNOME-modules-related fields in recent times).
While the OpenLDAP server was living on the GNOME machine called clipboard (aka ldap.gnome.org), the clients were configured to synchronize users, groups and passwords through the nslcd daemon. Several years later Jeff Schroeder joined the Sysadmin Team, and during one cold evening (Tuesday, February 1st 2011) he spent some time configuring SSSD to replace the nslcd daemon, which was missing one of SSSD’s most important features: caching. What surely convinced Jeff to adopt SSSD (a very new but promising piece of software at the time, as the first release had happened right before Christmas 2010) was, as the commit log also states (“New sssd module for ldap information caching”), SSSD’s caching feature.
It was enough for a user to log in once and the ‘/var/lib/sss/db’ directory was populated with their login information, saving the daemon in charge of picking up login details from having to query the LDAP server itself every single time a request was made. This feature has definitely helped on many occasions, especially when the LDAP server was down for some reason and sysadmins needed to access a specific machine or service: without SSSD that was never going to work, and sysadmins would probably have been locked out of the machines they were supposed to manage (unless you still had ‘/etc/passwd’, ‘/etc/group’ and ‘/etc/shadow’ entries as a fallback).
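A minimal sketch of the kind of SSSD configuration involved; the domain name, URI and search base here are invented for illustration and are not GNOME’s actual settings:

```ini
# /etc/sssd/sssd.conf -- illustrative example only
[sssd]
services = nss, pam
domains = EXAMPLE

[domain/EXAMPLE]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap.example.org
ldap_search_base = dc=example,dc=org
# The key feature described above: cache lookups and credentials locally
# (this is what populates /var/lib/sss/db/), so logins keep working
# even while the LDAP server is unreachable.
cache_credentials = true
entry_cache_timeout = 600
```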
Things were working just fine except for a few downsides that appeared later on:
- the web interface (view) on our LDAP user database was managed by Mango, an outdated tool which many wanted to rewrite in Django and which slowly became a huge dinosaur nobody ever wanted to look into again
- the Foundation membership information was managed through a MySQL database, so two databases, two sets of users unrelated to each other
- users were not able to modify their own account information themselves; even a single e-mail change required them to mail the GNOME Accounts Team, which would then authenticate their request and finally update the account.
Today’s infrastructure changes are here to finally say the issues outlined above are now fixed.

What has changed?
The GNOME Infrastructure is now powered by Red Hat’s FreeIPA, which bundles several FOSS components into one big “bundle”, all surrounded by an easy and intuitive web UI that will help users update their account information on their own, without the need of the Accounts Team or any other administrative entity. Users will also find two custom fields on their “Overview” page, these being “Foundation Member since” and “Last Renewed on date”. As you may have understood already, we finally managed to migrate the Foundation membership database into LDAP itself, to store the information we want once and for all. As a side note, it is possible that some users who were Foundation members in the past won’t find any detail stored in the Foundation fields outlined above. That is actually expected, as we were only able to migrate the current and old Foundation members who had an LDAP account registered at the time of the migration. If that’s your case and you would still like the information to be stored on the new setup, please get in contact with the Membership Committee stating so.

Where can I get my first login credentials?
Let’s make a little distinction between users that previously had access to Mango (usually maintainers) and users that didn’t. If you used to access Mango, you should be able to log in to the new Account Management System by entering your GNOME username and the password you used for logging in to Mango. (After logging in the very first time you will be prompted to update your password; please choose a strong password, as this account will be unique across the whole GNOME Infrastructure.)
If you never had access to Mango, you lost your password, or the first time you read the word Mango in this post you thought “why is he talking about a fruit now?”, you should be able to reset it by using the following command:
ssh -l yourgnomeuserid account.gnome.org
The command will start an SSH connection to account.gnome.org; once authenticated (with the SSH key you previously registered on our Infrastructure) it will trigger a command that directly sends a brand new password to the e-mail address registered for your account. From my tests it seems Gmail sees the e-mail as a phishing attempt, probably because the body contains the word “password” twice. That said, if the e-mail doesn’t appear in your inbox, please double-check your spam folder.

Now that Mango is gone how can I request a new account?
With Mango we used to have a form that automatically e-mailed the maintainer of the selected GNOME module, who would then approve or reject the request. From there, in the case of a positive vote from the maintainer, the Accounts Team would create the account itself.
With the recent introduction of a commit robot directly on l10n.gnome.org, the number of account requests has dropped. In addition, users will now be able to perform pretty much all the needed maintenance on their accounts themselves. That said, and while we will probably build a form in the future, we feel that requesting accounts can be achieved directly by mailing the Accounts Team itself, which will mail the maintainer of the respective module and create the account. As just said, the number of account creations has become very low and the queue is currently clear. The documentation has been updated to reflect these changes at:
The migration of all the user data and ACLs has been massive, and I’ve been spending a lot of time reviewing the existing HBAC rules, trying to spot possible errors or misconfigurations. If you happen to not be able to access a certain service the way you used to, please get in contact with the Sysadmin Team. All the possible ways to contact us are available at https://wiki.gnome.org/Sysadmin/Contact.

What is missing still?
Now that the Foundation membership information has been moved into LDAP, I’ll be looking at porting some of the existing membership scripts to it. What I have managed to port already are the welcome e-mails for new or existing members (renewals).
The next step will be generating a membership page from LDAP (to populate http://www.gnome.org/foundation/membership) and all the your-membership-is-going-to-lapse e-mails that were being sent until today.

Other news – /home/users mount on master.gnome.org
You will notice that logging in to master.gnome.org results in your home directory being empty. Don’t worry, you did not lose any of your files; master.gnome.org is now hosting your home directories itself. As you may be aware, adding files to the public_html directory on master used to result in them appearing on your people.gnome.org/~userid space. That was, unfortunately, expected, as both master and webapps2 (the machine serving people.gnome.org’s webspaces) were mounting the same GlusterFS share.
We wanted to prevent that behaviour, as we want to know who has access to what resource and where. From today, master’s home directories are there just as a temporary spot for your tarballs: just scp and use ftpadmin against them, that should be all you need from master. If you are interested in receiving or keeping your people.gnome.org webspace, please mail us stating so.

Other news – a shiny and new error 500 page has been deployed
Thanks to Magdalen Berns (magpie), a new error 500 web page has been deployed on all the Apache instances we host. The page contains an iframe of status.gnome.org and will appear every single time the web server behind the service you are trying to reach is unreachable for maintenance or other reasons. While I hope you won’t see the page that often, you can still enjoy it at https://static.gnome.org/error-500/500.html. Make sure to whitelist status.gnome.org in your browser, as it currently loads without https (the service is currently hosted on OpenShift, which provides us with a *.rhcloud.com wildcard certificate that differs from the CN the browser would expect).
I spent the weekend using almost exclusively my Chromebook 13, on a single charge Saturday and Sunday.

Keyboard
I think I like the keyboard better now than when I first tried it. It gets nowhere near the ThinkPad X230 one, though, apart from the coating, which my (backlit) X230 unfortunately does not have.

Screen
While the screen appeared very grainy to me at first sight, having only used IPS screens in the past year, I got used to it over the weekend and now do not notice much graininess anymore. The contrast still seems extremely poor, the colors are not vivid, and the vertical viewing angles are still a disaster, though.

Battery life
I think the battery life is awesome. I have 30% remaining now while I am writing this blog post, and Chrome OS tells me I still have 3 hours and 19 minutes remaining. It could probably still be improved, though: I notice that Chrome OS normally uses 7-14% CPU when idle (and up to 20% in exceptional cases).
The maximum power usage I measured using the battery’s internal sensor was about 9.2W, with 5 Big Buck Bunny 1080p videos playing in parallel. Average power consumption is around 3-5W (up to 6.5W with a single video playing), depending on brightness and use.

Performance
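For reference, figures like these can be read from the kernel’s power-supply sysfs interface. A small sketch of that approach; the BAT0 path and the microwatt unit are common defaults that vary between machines, and this is my illustration rather than how the measurement above was actually taken:

```shell
#!/bin/sh
# Convert a sysfs power_now reading (microwatts on most systems) to watts.
# Usage: watts /sys/class/power_supply/BAT0/power_now
watts() {
    awk '{ printf "%.1f W\n", $1 / 1000000 }' "$1"
}
```

Polling it once a second with `watch` gives a rough live power meter without any extra tooling.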
While I do notice a performance difference to my much more high-end Ivy Bridge Core i5 laptop, it turns out to be usable enough to not make me want to throw it at a wall. Things take a bit longer than I am used to, but it is still acceptable.

Input: Software Part
The user interface is great. There are a lot of gestures available for navigating between windows, tabs, and in the history. For example, horizontally swiping with two fingers moves in history, with three fingers moves between tabs; and swiping down (or up for Australian scrolling) gives an overview of all windows (like Exposé on Mac, GNOME’s activities, or the multi-tasking thing Maemo used to have).
What I miss is a keyboard shortcut like Meta + Left/Right on GNOME, which moves the active window to the left/right side of the screen. That would be very useful for multi-tasking situations.

Issues
I noticed some performance issues. For example, I can easily get the Chromebook to use 85% of a CPU by scrolling on a page with the touchpad or 70% for scrolling by keeping a key pressed (crbug.com/420452).
While watching Big Buck Bunny on YouTube, I noticed some (micro) stuttering in the beginning of the film, as well as each time I move in or out of the video area when not in full-screen mode (crbug.com/420582). It also increases CPU usage to about 70%.

Running a “proper” Linux?
Today, I tried to play around a bit with Debian wheezy and Ubuntu trusty systems, in a chroot for now. I was trying to find out if I can get an accelerated X server with the standard ChromeOS kernel. The short answer is: No. I tried two things:
- Debian wheezy with the binaries from ChromeOS (they have the same xserver version)
- Ubuntu trusty with the Nvidia drivers
Unfortunately, neither worked. Option 1 failed because ChromeOS uses glibc 2.15 whereas wheezy uses 2.13. Option 2 failed because the sysfs interface is different between the ChromeOS and Linux4Tegra kernels.
I guess I’ll have to wait.
I also tried booting a custom kernel from USB, but given that the u-boot always sets console= and there is no non-verified u-boot available yet, I could not see any output on the screen :( – Maybe I should build a u-boot myself?
jecode.org is a nice initiative by, among others, my fellow Debian developer and university professor Martin Quinson. The goal of jecode.org is to raise awareness about the importance of learning the basics of programming, for everyone in modern societies. jecode.org targets specifically francophone children (hence the name, for "I code").
I've been happy to contribute to the initiative with my thoughts on why learning to program is so important today, joining the happy bunch of "codeurs" on the web site. You can find them reposted below. If you also write French, you might want to contribute your thoughts on the matter. How? By forking the project, of course!

Why do you code?
First of all, I code because it is a fascinating, fun activity, and one that lets you experience the pleasure of creating.
Secondly, I code to automate the repetitive tasks that can make our digital lives painful. A computer is designed for exactly that: freeing human beings from stupid tasks, so that they can concentrate on the tasks that need human intelligence to be solved.
But I also code for the pure pleasure of hacking, i.e., finding original and unexpected uses for existing software.

How did you learn?
Completely by chance, when I was a kid. At 7 or 8 years old, in the municipal library of my small village, I stumbled upon a book that taught programming in BASIC through the metaphor of the game of the goose. From that day on I used my father's Commodore 64 much more for programming than for video games: coding is so much more fun!
Later, in high school, I came to appreciate structured programming and the enormous advantages it brings over BASIC's GO TOs, and I became a Pascal addict. The rest came with university and the discovery of Free Software: the Ali Baba's cave of the curious coder.

What is your favorite language?
I have several favorite languages.
I like Python for its syntactic minimalism, its vast and well-organized community, and for the abundance of tools and resources it offers. I use Python for developing medium/large infrastructure (often equipped with web interfaces), especially if I want to create a community of contributors around the software.
I like OCaml for its type system and its ability to capture the good properties of complex applications. That allows the compiler to help developers enormously in avoiding both coding and design errors.
I also use Perl and shell scripts (mainly Bash) a lot for task automation: these languages' ability to connect other applications is still unmatched.

Why should everyone learn to program, or at least be introduced to it?
We are more and more dependent on software. When we use a dishwasher, drive a car, get treated in a hospital, communicate on a social network, or surf the Web, our activities are constantly carried out by software. Whoever controls that software controls our lives.
As citizens of a world that is more and more digital, if we do not want to become slaves 2.0, we must claim control over the software that surrounds us. To achieve that, Free Software---which lets us use, study, modify and share software without restrictions---is an indispensable ingredient, along with a broad diffusion of programming skills: every bit of knowledge in this domain makes us all freer.
I decided I’d post this monthly. It may be a bit boring, sorry, but I think it’s a nice thing to have this public. The log starts on the 6th, because on the 4th I was back from DebConf (after a day in San Francisco, plus 20 hours of traveling and a 15-hour time difference).
Saturday 6th & Sunday 7th:
– packaged libjs-twitter-bootstrap-wizard (in new queue)
– Uploaded python-pint after reviewing the debian/copyright
– Worked on updating python-eventlet in Experimental, and adding Python3 support. It seems Python3 support isn’t ready yet, so I will probably remove that feature from the package update.
– Tried to apply the Django 1.7 patches for python-django-bootstrap-form. They didn’t work, but Raphael came back on Monday morning with new versions of the patches, which should be good this time.
– Helped the DSA (Debian System Administrators) with the Debian OpenStack cloud. It’s looking good and working now (note: I helped them during Debconf 14).
– Started a page about adding more tasksel tasks: https://wiki.debian.org/tasksel/MoreTasks. It’s looking like Joey Hess is adding new tasks by default in Tasksel, with “OpenStack compute node” and “OpenStack proxy node”. It will be nice to have them in the default Debian installer! :)
– Packaged and uploaded python-dib-utils, now in NEW queue.
– Uploaded fixed python-django-bootstrap-form with patch for Django 1.7.
– Packaged and uploaded python-pysaml2.
– Finalized and uploaded python-jingo, which is needed for python-django-compressor unit tests
– Finalized and uploaded python-coffin, which is needed for python-django-compressor unit tests
– Worked on running the unit tests for python-django-compressor, as I needed to know if it could work with Django 1.7. It was hard to find the correct way to run the unit tests, but finally they all passed. I will add the unit tests once coffin and jingo are accepted in Sid.
– Applied patches in the Debian BTS for python-django-openstack-auth and Django 1.7. Uploaded the fixed package.
– Fixed python-django-pyscss compat with Django 1.7, uploaded the result.
– Updated keystone to Juno b3.
– Built Wheezy backports of some JS libs needed for Horizon in Juno, which I already uploaded to Sid last summer:
– Upstreamed the Django 1.7 patch for python-django-openstack-auth:
– Updated and uploaded Swift 2.1.0. Added swift-object-expirer package to it, together with init script.
Basically, cleaned the Debian BTS of almost all issues today… :P
– Added it.po update to nova (Closes: #758305).
– Backported libvirt 1.2.7 to Wheezy, to be able to close this bug: https://bugs.debian.org/757548 (eg: changed dependency from libvirt-bin to libvirt-daemon-system)
– Uploaded the fixed nova package using libvirt-daemon-system
– Upgraded python-trollius to 1.0.1
– Fixed tuskar-ui to work with Django 1.7. Disabled pep8 tests during build. Added build-conflicts: python-unittest2.
– Fixed python-django-compressor for Django 1.7, and now running unit tests with it, after python-coffin and python-jingo got approved in Sid by FTP masters.
– Fixed python-xstatic wrong upstream URLs.
– Added it.po debconf translation to Designate.
– Added de.po debconf translation to Tuskar.
– Fixed copyright holders in python-xstatic-rickshaw
– Added python-passlib as dependency for python-cinder.
Remaining 3 issues in the BTS: ceilometer FTBFS, Horizon unit test with Django 1.7, Designate fail to install. All of the 3 are harder to fix, and I may try to do so later this week.
– Fixed python-xstatic-angular and python-xstatic-angular-mock to deal with the new libjs-angularjs version (closes 2 Debian RC bugs: uninstallable).
– Fixed ceilometer FTBFS (Closes rc bug)
– Fixed wrong copyright file for libjs-twitter-bootstrap-wizard after the FTP masters told me, and reuploaded to Sid.
– Reuploaded wrong upload of ceilometer (wrong hash for orig.tar.xz)
– Packaged and uploaded python-xstatic-bootstrap-scss
– Packaged and uploaded python-xstatic-font-awesome
– Packaged and uploaded ntpstat
– packaged and uploaded python-xstatic-jquery.bootstrap.wizard
– Fixed python-xstatic-angular-cookies to use new libjs-angularjs version (fixed version dependencies)
– Fixed Ceilometer FTBFS (Closes: #759967)
– Backported all python-xstatic packages to Wheezy, including all dependencies. This includes backporting a bunch of nodejs packages which were needed as build-dependencies (around 70 packages…). Filed about 5 or 6 release-critical bugs as some nodejs packages were not buildable as-is.
– Fixed some too-restrictive python-xstatic-angular* dependencies on libjs-angularjs (the libjs-angularjs version increased).
– Uploaded updates to Experimental:
o python-eventlet 0.15.2 (this one took a long time as it needed maintenance)
– Uploaded to Sid:
o python-diskimage-builder 0.1.30-1
o python-django-pyscss 1.0.2-1
– Fixed horizon so that libapache2-mod-wsgi is a dependency of openstack-dashboard-apache and not just openstack-dashboard (in both Icehouse & Juno).
– Removed the last failing Django 1.7 unit test from Horizon. It doesn’t seem relevant anyway.
– Backported python-netaddr 0.7.12 to Wheezy (needed by oslo-config).
– Started working on oslo.rootwrap, though it failed to build in Wheezy with a unit test failure.
– To experimental:
o Uploaded oslo.rootwrap 220.127.116.11~a1. It needed a build-depends on iproute2 because of a new test.
o Uploaded python-oslo.utils 0.3.0
o Uploaded python-oslo.vmware 0.6.0, fixed sphinx-build conf.py and filed a bug about it: https://bugs.launchpad.net/oslo.vmware/+bug/1370370, plus e-mailed the committer of the issue (which appeared 2 weeks ago).
o Uploaded python-pycadf 0.6.0
o Uploaded python-pyghmi 0.6.17
o Uploaded python-oslotest 18.104.22.168~a2, including a patch for Wheezy, which I also submitted upstream: https://review.openstack.org/122171/
o Uploaded glanceclient 0.14.0, added a patch to not use the embedded version of urllib3 in requests: https://review.openstack.org/122184
– To Sid:
o Uploaded python-zake_0.1.6-1
– Backported zeromq3-4.0.4+dfs, pyzmq-14.3.1, pyasn1-0.1.7, python-pyasn1-modules-0.0.5
– Uploaded keystoneclient 0.10.1, fixed the saml2 unit tests which were broken using testtools >= 0.9.39. Filed bug, and warned code author: https://bugs.launchpad.net/python-keystoneclient/+bug/1371085
– Uploaded swiftclient 2.3.0 to experimental.
– Uploaded ironicclient 0.2.1 to experimental.
– Uploaded saharaclient, filed bug with saharaclient expecting an up and running keystone server: https://bugs.launchpad.net/python-saharaclient/+bug/1371177
– Uploaded keystone Juno b3, filed bug about unit tests downloading with git, while no network access should be performed during package build (forbidden by Debian policy).
– Uploaded python-oslo.db 1.0.0 which I forgot in the dependency list, and which was needed for Neutron.
– Uploaded nova 2014.2~b3-1 (added a new nova-serialproxy service daemon to the nova-consoleproxy)
– Uploaded Neutron Juno b3.
– Uploaded python-retrying 1.2.3 (was missing from depends upload)
– Uploaded Glance Juno b3.
– Uploaded Cinder Juno b3.
– Fixed python-xstatic-angular-mock which had a .pth packaged, as well as the data folder (uploaded debian release -3).
– Fixed missing depends and build-conflicts in python-xstatic-jquery.
– Dropped python-pil & python-django-discover-runner from runtime Depends: of python-django-pyscss, as it’s only needed for tests. It also created a conflicts, because python-django-discover-runner depends on python-unittest2 and horizon build-conflicts with it.
– Forward-ported the Django 1.7 patches for Horizon. Opened new patch: https://review.openstack.org/122992 (since the old fix has gone away after a refactor of the unit test).
– Uploaded Horizon Juno b3.
– Applied https://review.openstack.org/#/c/122768/ to the keystone package, so that it doesn’t do “git clone” of the keystoneclient during build.
– Uploaded oslo.messaging 1.4.0 to experimental
– Uploaded oslo.messaging 1.4.0+really+1.3.1-1 to fix the issue in Sid/Jessie after the wrong upload (due to Zul’s wrong tagging of Keystone in the 2014.1.2 point release).
– Uploaded ironic 2014.2~b3-1 to experimental
– Uploaded heat 2014.2~b3-1 (with some fixes for sphinx doc build)
– Uploaded ceilometer 2014.2~b3-1 to experimental
– Uploaded openstack-doc-tools 0.19-1 to experimental
– Uploaded openstack-trove 2014.2~b3-1 to experimental
– Uploaded python-neutronclient with fixed version numbers for cliff and six. The missing version requirement for cliff produced an error in Trove, which I don’t want to happen again.
– Added fix for unit tests in Trove: https://review.openstack.org/#/c/123450/1,publish
– Uploaded oslo.messaging 1.4.1 to Experimental, fixing the version conflict with the one in Sid/Jessie. Thanks to Doug Hellmann for doing the tagging. I will need to upload new versions of the following packages with the >= 1.4.1 depends:
 o ceilometer
 o ironic
 o keystone
 o neutron
 o nova
 o oslo-config
 o oslo.rootwrap
 o oslo.i18n
 o python-pycadf
See http://lists.openstack.org/pipermail/openstack-dev/2014-September/046795.html for more explanation about the mess I’m repairing…
– Uploaded designate Juno b3.
– Uploaded oslosphinx 188.8.131.52
– Uploaded update to django-openstack-auth (new last minute requirement for Horizon).
– Uploaded final oslo-config package version 1.4.0
– Packaged and uploaded Sahara. This needs some tests by someone else as I don’t even know how it works.
– Uploaded python-keystonemiddleware 1.0.0-3, fixing CVE-2014-7144 (TLS cert verification option not honoured in paste configs). https://bugs.debian.org/762748
– Packaged and uploaded python-yaql, sent pull request for fixing print statements into Python3 compatible print function calls: https://github.com/ativelkov/yaql/pull/15
– Packaged and uploaded python-muranoclient.
– Started the packaging of Murano (not finished yet).
– Uploaded python-keystoneclient 0.10.1-2 with the CVE-2014-7144 fix to Sid, with urgency=high. Uploaded 0.11.1-1 to Experimental.
– Uploaded python-keystonemiddleware fix for CVE-2014-7144.
– Uploaded openstack-trove 2014.2~b3-3 with last unit test fix from https://review.openstack.org/#/c/123450/
– Uploaded a fix for murano-agent, which makes it run as root.
– Finished the packaging of Murano
– Started packaging murano-dashboard, sent this patch to fix the wrong usage of the /usr/bin/coverage command: https://review.openstack.org/124444
– Fixed wrong BASE_DIR in python-xstatic-angular and python-xstatic-angular-mock
– uploaded python-xstatic-bootstrap-scss which I forgot to upload… :(
– uploaded python-pyscss 1.2.1
– After a long investigation, I found out that the issue when installing the openstack-dashboard package was due to a wrong patch I did for Python 3.2 in Wheezy in python-pyscss. Corrected the patch from version 1.2.1-1 and uploaded version 1.2.1-2; the dashboard now installs correctly. \o/
– Did a new version of a Horizon patch at https://review.openstack.org/122992/ to address Django 1.7 compat.
– Uploaded new version of python-pyscss fixing the last issue with Python 3 (there was a release critical bug on it).
– Uploaded a fix for python-django-openstack-auth, which failed to build in the Sid version, broken since the last upload of keystoneclient (which made some of its API private).
– Uploaded python-glance-store 0.1.8, including Ubuntu patch to fix unit tests.
– Reviewed the packaging of python-strict-rfc3339 (see https://bugs.debian.org/761152).
– Uploaded Sheepdog with fix in the init script to start after corosync (Closes: #759216).
– Uploaded pt_BR.po Brazilian Portuguese debconf templates translation for nova Icehouse in Sid (previously only committed in Git for Juno).
– Same for Glance.
– Added Python3 support in python-django-appconf, uploaded to Sid
– Upgraded to python-django-pyscss 1.0.3, and fixed broken unit tests with this new release under Django 1.7. Created pull request: https://github.com/fusionbox/django-pyscss/pull/22
– Fixed designate requirements.txt in Sid (Icehouse) to allow SQLA 0.9.x. Uploaded resulting package to Sid.
– Uploaded new Debian fix for python-tooz: kills memcached only if the package scripts started it (plus cleans .testrepository on clean).
– Uploaded initial release of murano
– Uploaded python-retrying with patch from Ubuntu to remove embedded copy of six.py code.
– Uploaded python-oslo.i18n 1.0.0 to experimental (same as before, just bump of version #)
– Uploaded python-oslo.utils 1.0.0 to experimental (same as before, just bump of version #)
– Uploaded Keystone Juno RC1
– Uploaded Glance Juno RC1
Before I forget, I had meant to write about a toy virtual machine which I've been playing with.
It is register-based with ten registers, each of which can hold either a string or int, and there are enough instructions to make it fun to use.
I didn't go overboard and write a complete grammar, or a real compiler, but I did do enough that you can compile and execute obvious programs.
First compile from the source to the bytecodes:
$ ./compiler examples/loop.in
Mmm bytecodes are fun:
$ xxd ./examples/loop.raw
0000000: 3001 1943 6f75 6e74 696e 6720 6672 6f6d  0..Counting from
0000010: 2074 656e 2074 6f20 7a65 726f 3101 0101   ten to zero1...
0000020: 0a00 0102 0100 2201 0102 0201 1226 0030  ......"......&.0
0000030: 0104 446f 6e65 3101 00                   ..Done1..
Now the compiled program can be executed:
$ ./simple-vm ./examples/loop.raw
[stdout] register R01 = Counting from ten to zero
[stdout] register R01 = 9 [Hex:0009]
[stdout] register R01 = 8 [Hex:0008]
[stdout] register R01 = 7 [Hex:0007]
[stdout] register R01 = 6 [Hex:0006]
[stdout] register R01 = 5 [Hex:0005]
[stdout] register R01 = 4 [Hex:0004]
[stdout] register R01 = 3 [Hex:0003]
[stdout] register R01 = 2 [Hex:0002]
[stdout] register R01 = 1 [Hex:0001]
[stdout] register R01 = 0 [Hex:0000]
[stdout] register R01 = Done
There could be more operations added, but I'm pleased with the general behaviour, and embedding is trivial. The only two things that make this even remotely interesting are:
- Most toy virtual machines don't cope with labels and jumps. This does, even though it was a real pain to go patching up the offsets. Having labels be callable before they're defined is pretty mandatory in practice.
- Most toy virtual machines don't allow both integers and strings to be stored in registers. Now that I've done that, I'm not 100% sure it's a good idea.
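The offset-patching pain mentioned above is what a classic two-pass assembler solves: pass one records each label's address, pass two emits the jumps from that table, so forward references cost nothing. A toy sketch of the idea in shell/awk; the `name:` / `jmp name` syntax here is invented for illustration and is not this VM's actual format:

```shell
#!/bin/sh
# Two-pass label resolution: rewrite "jmp NAME" as "jmp ADDRESS",
# where the address is simply the line number of the "NAME:" line.
resolve_labels() {
    awk '
        # Pass 1 (NR==FNR while reading the file the first time):
        # remember the address of every "name:" definition.
        NR == FNR {
            if ($1 ~ /:$/) { lbl = substr($1, 1, length($1) - 1); addr[lbl] = FNR }
            next
        }
        # Pass 2: emit code, patching forward and backward jumps.
        $1 == "jmp" { print "jmp " addr[$2]; next }
        { print }
    ' "$1" "$1"
}
```

Reading the same file twice is the cheap shell trick that replaces the by-hand offset patching: by the time pass two sees a `jmp`, every label, including ones defined later, is already in the table.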
Anyway, that concludes today's computer fun.
Recently I had to reinstall a system at the office with Debian Wheezy, and I thought I should use this opportunity to experiment with LVM. Yes, I had never used LVM before, even though I've been using Linux for more than 5 years now. I know many DD friends who use LVM with LUKS encryption, and I always wanted to experiment, but since my laptop is the only machine I have, and it's currently in perfect shape, I didn't dare experiment there. This reinstall was a golden opportunity to experiment and learn something new.
I used the Wheezy CD ISO, downloaded using jigdo, for the installation. Now I will go a bit off topic and share the USB stick preparation, because I had not done an installation for quite a while; the last one was back in Squeeze times, so as usual I blindly executed the following command:
cat debian-wheezy.iso > /dev/sdb
Surprisingly, the USB stick didn't boot! I was getting "Corrupt or missing ISO.bin". So next I tried preparing it with dd:
dd if=debian-wheezy.iso of=/dev/sdb
Surprisingly, this also didn't work, and I got the same error message as above. This is when I went back to the Debian manual, looked up the installation steps, and found a new way!
cp debian-wheezy.iso /dev/sdb
Look at the destination: it's a device, and voilà, this worked! This is something new I learnt, and I'm surprised how easy it is now to prepare a USB stick. But I still don't understand why the first two methods failed. If you know, please do share.
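For what it's worth, all three commands should do the same thing: copy the ISO's raw bytes to the device, byte for byte. A sanity check of that claim in Python, using ordinary files in place of /dev/sdb (so no root needed); each copy mimics one of the commands:

```python
import hashlib
import os
import shutil
import tempfile

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

workdir = tempfile.mkdtemp()
iso = os.path.join(workdir, "debian-wheezy.iso")
with open(iso, "wb") as f:
    f.write(os.urandom(1 << 20))  # 1 MiB stand-in for the real image

# cat iso > target  /  dd if=iso of=target: a sequential raw copy
cat_target = os.path.join(workdir, "target-cat")
with open(iso, "rb") as src, open(cat_target, "wb") as dst:
    for block in iter(lambda: src.read(64 * 1024), b""):
        dst.write(block)

# cp iso target: the same bytes again
cp_target = os.path.join(workdir, "target-cp")
shutil.copyfile(iso, cp_target)

assert sha256(cat_target) == sha256(cp_target) == sha256(iso)
print("all copies byte-identical")
```

So in principle the bytes on the stick end up identical either way; my only (speculative) guess about the earlier failures is a write that never made it out of the page cache, e.g. unplugging the stick before running sync.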
Now, coming back to LVM. I chose LVM when the installer asked about disk partitioning, used the guided partitioning method provided by debian-installer, and ended up with the following layout:
$ lvs
  LV     VG          Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  home   system-disk -wi-ao-- 62.34g
  root   system-disk -wi-ao--  9.31g
  swap_1 system-disk -wi-ao--  2.64g
So the guided partitioning of debian-installer allocates 10G for root and the rest to home and swap. That is not a problem in itself, but as I started installing the software I needed, I could see root running out of space quickly, so I wanted to resize root and give it 10G more. For that I needed to shrink home by 10G, which first required unmounting the home partition. Unmounting home from the running system isn't possible, so I booted into recovery mode assuming I could unmount it there, but I couldn't. lsof didn't show anyone using /home; after searching a bit I found the fuser command, and it showed that /home was held by the kernel, which had mounted it.
$ fuser -vm /home
                     USER     PID ACCESS COMMAND
/home:               root   kernel mount  /home
So it isn't possible to unmount /home in recovery mode either. Online materials told me to use a live CD for this, but I didn't have the patience for that, so I just commented out the /home mount in /etc/fstab and rebooted! This time it worked, and /home was not mounted in recovery mode. Now came the hard part: resizing home. Thanks to the TLDP doc on reducing, I could do it with the following steps:
# e2fsck -f /dev/volume-name/home
# resize2fs /dev/volume-name/home 52G
# lvreduce -L-10G /dev/volume-name/home
And now the next part: live-extending the root partition. Again thanks to the TLDP doc on extending, the following commands did it.
# lvextend -L+10G /dev/volume-name/root
# resize2fs /dev/volume-name/root
And now the important part! Uncomment the /home line in /etc/fstab so it will be mounted normally on the next boot, and reboot! On login I could see my partitions updated.
# lvs
  LV     VG          Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  home   system-disk -wi-ao-- 52.34g
  root   system-disk -wi-ao-- 19.31g
  swap_1 system-disk -wi-ao--  2.64g
I've started liking LVM more now! :)
It’s been more than 2 years since my last post about my smartphone. In the time after that post I upgraded my much loved Windows Phone 7 device to Windows Phone 8 (which I got rid of within months, for sucking), briefly used Firefox OS, then eventually used a Nexus 4 for at least a year.
After years of terrible service provision and pricing, I decided I would not stay with my network Orange a moment longer – and in getting a new contract, I would get a new phone too. So on Friday, I signed up to a new £15 per month contract with Three, including 200 minutes, unlimited data, and 25GB of data roaming in the USA and other countries (a saving of £200,000 per month versus Orange). Giffgaff is similarly competitive for data, but not roaming. No other network in the UK is competitive.
For the phone, I had a shortlist of three: Apple iPhone 6, Sony Xperia Z3 Compact, and Samsung Galaxy Alpha. These are all “small” phones by 2014 standards, with a screen about the same size as the Nexus 4. I didn’t consider any Windows Phone devices because they still haven’t shipped a functional music player app on Windows Phone 8. Other more “fringe” OSes weren’t considered, as I insist on trying out a real device in person before purchase, and no other comparable devices are testable on the high street.

iPhone 6
This was the weakest offering, for me. £120 more than the Samsung, and almost £200 more than the Sony, a much lower hardware specification, physically larger, less attractive, and worst of all – mandatory use of iTunes for Windows for music syncing.
The only real selling point for me would be access to iPhone apps. And, I guess, a decreased chance of mockery by co-workers.

Galaxy Alpha
Now on to the real choices. I’ve long felt that Samsung’s phones are ugly plasticky tat – the Galaxy S5 is popular and well-marketed, but looks and feels cheap compared to HTC’s unibody aluminium One. They’ve also committed the cardinal sin of gimping the specifications of their “mini” (normal-sized) phones, compared to the “normal” (gargantuan) versions. The newly released S5 Mini is about the same spec as early 2012’s S3, the S4 Mini was mostly an S2 internally, and so on.
However, whilst HTC have continued along these lines, Samsung have finally released a proper phone under 5″, in the Alpha.
The Alpha combines a 4.7″ AMOLED screen, a plastic back, metal edges, 8-core big.LITTLE processor, and 2GB RAM. It is a PRETTY device – the screen really dazzles (as is the nature of OLED). It feels like a mix of design cues from an iPhone and Samsung’s own, keeping the angular feel of iPhone 4->5S rather than the curved edges on the iPhone 6.
The Galaxy Alpha was one of the two devices I seriously considered.

Xperia Z3 Compact
The other Android device I considered was the Compact version of Sony’s new Xperia Z3. Unlike other Android vendors, Sony decided that “mini” shouldn’t mean “low end” when they released the Z1 compact earlier this year. The Z3 follows suit, where the same CPU and storage are found on both the big and little versions.
The Z3C has a similar construction to the Nexus 4, with glass front and back, and a plastic rim. The specification is similar to the Galaxy Alpha (with a quad-core 2.5GHz Qualcomm processor about 15% faster than the big.LITTLE Exynos in the Galaxy Alpha). It differs in a few places: LCD rather than AMOLED (bad); a non-removable (bad) 2600 mAh battery (good) compared to the removable 1860 mAh one in the Samsung; waterproofing (good); and a less hateful Android shell (Sony's Xperia UI versus Samsung's TouchWiz).
For those considering a Nexus-4-replacement class device (yes, rjek, that means you), both the Samsung and the Sony are worth a look. They both have good points and bad points. In the end, both need to be tested to form a proper opinion. But for me, the chunky battery and tasteful green were enough to swing it for the Sony. So let’s see where I stand in a few months’ time. Every phone I’ve owned, I’ve ended up hating it for one reason or another. My usual measure for whether a phone is good or not is how long it takes me to hit the “I can’t use this” limit. The Nokia N900 took me about 30 minutes, the Lumia 800 lasted months. How will the Z3 Compact do? Time will tell.
Today I came across an unexpected Ubuntu boot screen. Above the bread shelf in the ICA shop at Storo in Oslo, the GRUB menu of Ubuntu with Linux kernel 3.2.0-23 (i.e. probably version 12.04 LTS) was stuck on a screen normally showing the bread types and prices:
If it had booted as it was supposed to, I would never have known about this hidden Linux installation. It is interesting what errors can reveal.
We returned safely from Kraków, despite a somewhat turbulent flight home.
There were many pictures taken, but thus far I've only posted a random night-time shot. Perhaps more will appear in the future.
The release contains a bunch of minor fixes, and some new facilities relating to templates.
It seems likely that in the future there will be the ability to create "static pages" along with the blog entries, tag-clouds, etc. The suggestion was raised on the GitHub issue tracker, and as a proof of concept I hacked up a solution which works entirely via the chronicle plugin-system, proving that the new development work wasn't a waste of time - especially when combined with the significant speedups in the new codebase.
(ObRandom: Mailed the Debian package-maintainer to see if there was interest in changing. Also mailed a couple of people I know who are using the old code to see if they had comments on the new code, or had any compatibility issues. No replies from either, yet. *shrugs*)
The lsdvd project got a new set of developers a few weeks ago, after the original developer decided to step down and pass the project to fresh blood. This project is now maintained by Petter Reinholdtsen and Steve Dibb.
- Ignore 'phantom' audio, subtitle tracks
- Check for garbage in the program chains, which indicates that a track is non-existent, to work around additional copy protection
- Fix displaying content type for audio tracks, subtitles
- Fix palette display of the first entry
- Fix include orders
- Ignore read errors in titles that would not be displayed anyway
- Fix the chapter count
- Make sure the array size and the array limit used when initialising the palette size is the same.
- Fix array printing.
- Correct subsecond calculations.
- Add sector information to the output format.
- Clean up code to be closer to ANSI C and compile without warnings with more GCC compiler warnings.
This change brings together patches for lsdvd in use in various Linux and Unix distributions, as well as patches submitted to the project over the last nine years. Please check it out. :)
I’ve just applied to be a (non-uploading) Debian Developer. I’ve just filled in the form, and decrypted the message that I received to confirm my application (I had read the important documents a long time ago, and again some weeks ago, and again some days ago).
I was expecting to gather some GPG signatures today, but the event was cancelled (postponed). So, beginning next week, I’ll try to gather GPG signatures one by one, by myself.
The outdated translations of the website are all updated (no more yellow stickers in the Spanish http://www.debian.org!), and I have already begun translating new files.
I’ve sent mails to say thank you to some of the people who helped me during this phase as a Debian Contributor.
I think I’ve done everything that I can do for now. So let’s wait.
I don’t know how will I sleep tonight.
You can comment in this pump.io thread.
Today I want to talk about the different approaches of Elektra and Config::Model. We have gotten a lot of questions lately about why Elektra is necessary and what differentiates it from similar tools like Config::Model. While there are a lot of similarities between the two, there are some key differences, and those are what I will focus on in this post.
Once a specification is defined for Elektra and a plug-in is written to work with that specification, other developers will be able to reuse these specifications for programs that have similar configurations (such as a specification and plug-in for the INI file type). Additionally, specifications, once defined in KDB, can be used across multiple programs. For instance, I might define a specification for my program within KDB:
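The example specification appears to have been lost from this post; reconstructed from the surrounding text, it would have been something of roughly this shape, defining a show_hidden_files key with a Boolean type check (the exact key layout and description are my invention, not necessarily the original):

```
[show_hidden_files]
type = Boolean
default = false
description = whether the file view also lists hidden files
```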
Any other program could use my specification just by referring to show_hidden_files. These features allow Elektra to solve the problem of cross-platform configurations by providing a consistent API and also allow users to easily be aware of other applications’ configurations which allows for easier integration between programs.
Config::Model also aims to provide a unified interface for configuration data, and it also supports validation, such as the type=Boolean check in the example above. The biggest difference between the two projects is that Elektra is intended for use by the programs themselves as well as by external GUIs and validation tools, unlike Config::Model, which provides a tool allowing developers to give users a means to interactively edit configuration data in a safe way. Additionally, Elektra uses self-describing data: all the specifications are saved within KDB itself, as metadata. Furthermore, validators for Elektra can be written in any language, because the specifications are just stored as data, and they can enforce constraints on any access, because plug-ins define the behaviour of KDB itself.
Tying this all together with my GSoC project is the topic of three-way merges. Config::Model actually does not rely on a base version for merges, since its specifications must all be complete. This is a very good approach to handling merges in an advanced way, and one that Elektra would like to explore in the future, once we have enough specifications to handle all types of configuration.
I hope that this post clarifies the different approaches of Elektra and Config::Model. While both of these tools offer a better answer to configuration files they do have different goals and implementations that make them unique. I want to mention that we have a good relationship with the developers of Config::Model, who supported my Google Summer of Code Project. We believe that both of these tools have their own place and uses and they do not compete to achieve the same goals.
Ian S. Donnelly
Back in 2009, I set up githubredir.debian.net, a service that allowed following the tags of a GitHub-based project using uscan.
Maybe a year or two later, GitHub added the needed bits in their interface, so it was no longer necessary to provide this service. Still, I kept it alive in order not to break things.
But as it is just a silly web scraper, every time something changes in GitHub, the redirector breaks. I decided today that, as it is no longer a very useful project, it should be retired.
So, in the not too distant future (I guess, next time anything breaks), I will remove it. Meanwhile, every page generated will display this:
(of course, with the corresponding project/author names in)
Consider yourselves informed.
The MirBSD Korn Shell has got a new security and maintenance release.
This release fixes one mksh(1)-specific issue when importing values from the environment. The issue was detected by the main developer during careful code review, while checking whether the shell is affected by the recent “shellshock” bugs in GNU bash, many of which also affect AT&T ksh93. (The answer is: no, none of these bugs affects mksh.) Stéphane Chazelas kindly provided me with an in-depth look at how this can be exploited. The issue has not got a CVE identifier because it was identified as low-risk.

The problem here is that the environment import filter mistakenly accepted variables named “FOO+” (for any FOO), which are, by general environ(7) syntax, distinct from “FOO”, and treated them as appending to the value of “FOO”. An attacker who already had access to the environment could thus append values to parameters passed through programs (including sudo(8) or setuid binaries) to shell scripts, including indirectly, after those programs intended to sanitise the environment, e.g. invalidating the last $PATH component. It could also be used to circumvent sudo’s environment filter, which protected against the vulnerability of an unpatched GNU bash being exploited.
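The fix boils down to only importing entries whose name part is a well-formed variable name by environ(7) rules. A rough illustration of the idea in Python (this is not mksh's actual code, just the shape of the filter):

```python
import re

# A valid shell variable name: letters, digits, underscore, not
# starting with a digit.  Anything else -- including "FOO+" -- must
# not be treated as (part of) a variable assignment.
NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def import_environment(environ):
    """Keep only entries whose name is a well-formed identifier."""
    imported = {}
    for name, value in environ.items():
        if NAME.match(name):
            imported[name] = value
        # "PATH+" and friends are silently ignored, never appended.
    return imported

hostile = {"PATH": "/usr/bin", "PATH+": ":/tmp/evil", "1BAD": "x"}
print(import_environment(hostile))  # only PATH survives
```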
tl;dr: mksh not affected by any shellshock bugs, but we found a bug of our own, with low impact, which does not affect any other shell, during careful code review. Please do update to mksh R50c quickly.
You are a software vendor. You distribute software on multiple operating systems. Let’s say your software is a mildly popular internet browser. Let’s say its logo represents an animal and a globe.
Now, because you care about the security of your users, let’s say you would like the entire address space of your application to be randomized, including the main executable portion of it. That would be neat, wouldn’t it? And there’s even a feature for that: Position independent executables.
You get that working on (almost) all the operating systems you distribute software on. Great.
Then a Gnome user (or an Ubuntu user, for that matter) comes, and tells you they downloaded your software tarball, unpacked it, and tried opening your software, but all they get is a dialog telling them:
Could not display “application-name”
There is no application installed for “shared library” files
Because, you see, a Position independent executable, in ELF terms, is actually a (position independent) shared library that happens to be executable, instead of being an executable that happens to be position independent.
And nautilus (the file manager in Gnome and Ubuntu’s Unity) usefully knows to distinguish between executables and shared libraries. And will happily refuse to execute shared libraries, even when they have the file-system-level executable bit set.
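The distinction nautilus trips over lives in a single field of the ELF header: e_type, which is ET_EXEC (2) for a classic executable and ET_DYN (3) for both shared libraries and PIEs. A sketch of reading it in Python, using hand-crafted headers for illustration (and assuming little-endian ELF, as on x86):

```python
import struct

ET_EXEC, ET_DYN = 2, 3

def elf_type(header: bytes):
    """Classify an ELF file from its first 18 bytes."""
    if header[:4] != b"\x7fELF":
        return "not ELF"
    # e_ident is 16 bytes; e_type is the u16 immediately after it
    # (little-endian here; EI_DATA in e_ident gives the real byte order).
    (e_type,) = struct.unpack_from("<H", header, 16)
    return {ET_EXEC: "executable",
            ET_DYN: "shared object (or PIE)"}.get(e_type, "other")

# Minimal fake headers: magic, 12 padding bytes, then e_type.
exec_hdr = b"\x7fELF" + bytes(12) + struct.pack("<H", ET_EXEC)
pie_hdr  = b"\x7fELF" + bytes(12) + struct.pack("<H", ET_DYN)
print(elf_type(exec_hdr))  # executable
print(elf_type(pie_hdr))   # shared object (or PIE)
```

So any tool that keys its behaviour off e_type alone, as nautilus apparently does, cannot tell a PIE from a library.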
You’d think you could get around this by using a .desktop file, but the Exec field in those files requires a full path. (No, ./ doesn’t work unless the executable is in the nautilus process's current working directory, as in, the path nautilus was run from.)
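For illustration, the kind of .desktop file involved (name and paths hypothetical); a relative Exec value is resolved against nautilus's own working directory rather than the .desktop file's location, which is why only an absolute path works reliably:

```
[Desktop Entry]
Type=Application
Name=application-name
# Exec must be an absolute path; "Exec=./browser" would be resolved
# against the nautilus process's working directory, not this file's.
Exec=/home/user/application-name/application-name
```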
Dear lazyweb, please prove me wrong and tell me there’s a way around this.
Kudos to Matthew for taking a stance. It has, not surprisingly, provoked a lot of comments and feedback, most of it unpleasant.
If I did anything that was directly related to Intel, I'd join him, but I do very, very little architecture dependent stuff anymore.
I will, however, say this: Even if "gamergate" were actually about good journalism and ethics (and it's clear it isn't), if your reaction to a differing opinion is abuse, harassment, and other kinds of psychological violence, you're not making anything better, you're making it all worse.
Reasonable people can handle disagreement without any kind of violence.
Exactly 15 years ago I uploaded to Debian the first release of my whois client.
At the end of 1999 the United States Government forced Network Solutions, at the time the only registrar for the .com, .net and .org top-level domains, to split its functions into a registry and a registrar, and to allow competing registrars to operate.
Since then, two whois queries are needed to access the data for a domain in a TLD operating with a thin registry model: first one to the registry, to find out which registrar was used to register the domain, and then one to the registrar, to actually get the data.
Being as lazy as I am, I thought this was unacceptable, so I implemented a whois client that knows which whois server to query for each TLD and then automatically follows the referrals to the registrars.
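The referral-following logic is conceptually simple: query the registry, look for the registrar's whois server named in the reply, and query that server in turn. A toy sketch of the parsing step in Python (the field names vary per registry; this is not the actual whois(1) implementation):

```python
import re

# Common ways registries name the next server to ask; registries are
# inconsistent, so a real client needs many more variants than this.
REFERRAL = re.compile(
    r"^\s*(?:Registrar WHOIS Server|Whois Server|ReferralServer):\s*"
    r"(?:r?whois://)?(\S+)",
    re.IGNORECASE | re.MULTILINE)

def find_referral(registry_reply: str):
    """Return the registrar whois server named in a registry reply,
    or None if the reply looks authoritative already."""
    m = REFERRAL.search(registry_reply)
    return m.group(1) if m else None

reply = """\
   Domain Name: EXAMPLE.COM
   Registrar WHOIS Server: whois.example-registrar.com
   Registrar URL: http://www.example-registrar.com
"""
print(find_referral(reply))  # whois.example-registrar.com
```

A full client would then open a TCP connection to port 43 on the returned host and repeat the query there.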
But the initial reason for writing this program was to replace the simplistic BSD-derived whois client that was shipped with Debian with one that would know which server to query for IP addresses and autonomous system numbers, a useful feature in a time when people still used to manually report all their spam to the originating ISPs.
Over the years I have spent countless hours searching for the right servers for the domains of far away countries (something that has often been incredibly instructive) and now the program database is usually more up to date than the official IANA one.
One of my goals for this program has always been wide portability, so I am happy that over the years it was adopted by other Linux distributions, made available by third parties to all common variants of UNIX and even to systems as alien as Windows and OS/2.
Now that whois is 15 years old I am happy to announce that I have recently achieved complete world domination and that all Linux distributions use it as their default whois client.
For my 31st birthday I decided to build myself a computer, specifically a NAS and backup server which could do some other bits and pieces. I ended up buying a system based on the Gigabyte J1900N-D3V SoC from Mini-ITX (whose after-sales support is great, by the way).
I hope to write up a more comprehensive overview of what I've ended up with (probably in my rather dusty hardware section), but in the meantime I have a question for anyone else with this board:
If you've upgraded the BIOS, do the more recent BIOS versions insist on there being a display connected in order to boot?
Sadly the V1 BIOS version does, which seriously limits the utility of this board for my purposes. I did manage to flash the board up to V3, once, but it later decided to downgrade itself (believing the flashed BIOS to be corrupt). I haven't managed a second time. The EFI implementation in this board is... interesting. Convincing it to boot anything legacy is a tricky task.
As an aside, I recently stumbled across this suggestion on reddit to use an old-ish, Core-era Thinkpad T-series with a dock for this exact purpose: the spare ultrabay gives you two SATA drive slots; the laptop battery serves as a crude UPS; and there's a built-in keyboard and mouse, avoiding the issue I'm having with the J1900N-D3V. A Core i5 is more than fast enough for what I want to do, and it will have VT (virtualisation support). Hindsight is a wonderful thing...
It's taken me a while to get sufficiently riled up about Australia's current Islamophobia outbreak, but it's been brewing in me for a couple of weeks.
For the record, I'm an Atheist, but I'll defend your right to practise your religion, just don't go pushing it on me, thank you very much. I'm also not a huge fan of Islam, because it does seem to lend itself to more violent extremism than other religions, and ISIS/ISIL/IS (whatever you want to call them) aren't doing Islam any favours at the moment. I'm against extremism of any stripe though. The Westboro Baptists are Christian extremists. They just don't go around killing people. I'm also not a big fan of the burqa, but again, I'll defend a Muslim woman's right to choose to wear one. The key point here is choice.
I got my carpets cleaned yesterday by an ethnic couple. I like accents, and I was trying to pick theirs. I thought they may have been Turkish. It turned out they were Kurdish. Whenever I hear "Kurd" I habitually stick "Bosnian" in front of it after the Bosnian War that happened in my childhood. Turns out I wasn't listening properly, and that was actually "Serb". Now I feel dumb, but I digress.
I got chatting with the lady while her husband did the work. I got a refresher on where most Kurds are/were (Northern Iraq) and we talked about Sunni versus Shia Islam, and how they differed. I learned a bit yesterday, and I'll have to have a proper read of the Wikipedia article I just linked to, because I suspect I'll learn a lot more.
We briefly talked about burqas, and she said that because they were Sunni, they were given the choice, and they chose not to wear it. That's the sort of Islam that I support. I suspect a lot of the women running around in burqas don't get a lot of say in it, but I don't think banning it outright is the right solution to that. Those women need to feel empowered enough to be able to cast off their burqas if that's what they want to do.
I completely agree that a woman in a burqa entering a secure place (for example Parliament House) needs to be identifiable (assuming that identification is verified for all entrants to Parliament House). If it's not, and they're worried about security, that's what the metal detectors are for. I've been to Dubai. I've seen how they handle women in burqas at passport control. This is an easily solvable problem. You don't have to treat burqa-clad women as second class citizens and stick them in a glass box. Or exclude them entirely.