As always, even though I've not been posting much, I'm still buying books. This is a catch-up post listing a variety of random purchases.
Katherine Addison — The Goblin Emperor (sff)
Milton Davis — From Here to Timbuktu (sff)
Mark Forster — How to Make Your Dreams Come True (non-fiction)
Angela Highland — Valor of the Healer (sff)
Marko Kloos — Terms of Enlistment (sff)
Angela Korra'ti — Faerie Blood (sff)
Cixin Liu — The Three-Body Problem (sff)
Emily St. John Mandel — Station Eleven (sff)
Sydney Padua — The Thrilling Adventures of Lovelace and Babbage (graphic novel)
Melissa Scott & Jo Graham — The Order of the Air Omnibus (sff)
Andy Weir — The Martian (sff)
Huh, for some reason I thought I'd bought more than that.
I picked up the rest of the Hugo nominees that aren't part of a slate, and as it happens have already read all the non-slate nominees at the time of this writing (although I'm horribly behind on reviews). I also picked up the first book of Marko Kloos's series, since he did the right thing and withdrew from the Hugos once it became clear what nonsense was going on this year.
The rest is a pretty random variety of on-line recommendations, books by people who made sense on the Internet, and books by authors I like.
As the last step in the preparation of the TeX Live 2015 release we have now completely frozen updates to the repository and built the (hopefully) final image. The following weeks will see more testing and preparation of the gold master for DVD production.
The last weeks were full of frantic rebuilds of binaries, mostly due to bugs found at the last minute in several engines. Even after the closing time we found a serious problem with Windows installations in administrator mode, where unprivileged users didn't have read access to the format dumps. This was due to the use of File::Temp on Windows, which sets overly restrictive ACLs.
Unless really serious bugs show up, further changes will have to wait till after the release. Let’s hope for some peaceful two weeks.
So, it is time to prepare for release parties. Enjoy!
Because of CVE-2015-0847 and CVE-2013-7441, two security issues in nbd-server, I've had to prepare updates for nbd, which has several supported versions: upstream, unstable, stable, oldstable, oldoldstable, and oldoldstable-backports. I've just finished uploading security fixes for all of them. There are various relevant archives, and unfortunately it looks like they all have their own way of doing things regarding security:
- For squeeze-lts (oldoldstable), you check out the secure-testing repository, run a script from that repository that generates a DLA number and email template, commit the result, and send a signed mail (whatever format) to the relevant mailinglist. Uploads go to ftp-master with squeeze-lts as target distribution.
- For backports, you send a mail to the team alias requesting a BSA number, do the upload, and write the mail (based on a template that you need to modify yourself), which you then send (inline signed) to the relevant mailinglist. Uploads go to ftp-master with $dist-backports as target distribution, but you need to be in a particular ACL to be allowed to do so. However, due to backports policy, packages should never be in backports before they are in the distribution from which they are derived -- so I refrained from uploading to backports until the regular security update had been done. Not sure whether that's strictly required, but I didn't think it would do harm; even so, that did mean the procedure for backports was even more involved.
- For the distributions supported by the security team (stable and oldstable, currently), you prepare the upload yourself, ask permission from the security team (by sending a debdiff), do the upload, and then ask the security team to send out the email. Uploads go to security-master, which implies that you may have to use dpkg-buildpackage's -sa parameter in order to make sure that the orig.tar.gz is actually in the security archive.
- For unstable and upstream, you Just Upload(TM), because it's no different from a regular release.
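The security-team upload described above might look like the following sketch. The package version and the dput target name are assumptions; the -sa flag is what forces the orig.tar.gz into the upload:

```shell
# Build a source upload, forcing inclusion of the original tarball
# (security-master has never seen it). Version string is hypothetical.
dpkg-buildpackage -S -sa

# Upload to security-master; assumes a dput target of that name is configured.
dput security-master nbd_3.8-4+deb8u1_source.changes
```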
While I understand how the differences between the various approaches have come to exist, I'm not sure I understand why they are necessary. Clearly, there's some room for improvement here.
As anyone who reads the above may see, doing an upload for squeeze-lts is in fact the easiest of the three "stable" approaches, since no intermediate steps are required. While I'm not about to advocate dropping all procedures everywhere, a streamlining of them might be appropriate.
Long time without a blog post. My time got eaten by work and travel and work-related travel. Hopefully more content soon.
This is just a quick note about the release of version 1.34 of the git-pbuilder script (which at some point really should just be rewritten in Python and incorporated entirely into the git-buildpackage package). Guido Günther added support for creating chroots for LTS distributions.
You can get the latest version from my scripts page.
What do we want to achieve:
- SFTP server
- only a specified account is allowed to connect to SFTP
- nothing outside the SFTP directory is exposed
- no SSH login is allowed
- any extra security measures are welcome
Mount the removable drive which will hold the SFTP area (you might need to add some entry in fstab).
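A sketch of such an fstab entry (the UUID, filesystem type and options are placeholders; nofail keeps the system booting even when the drive is absent):

```
UUID=0123-4567  /media/Store  ext4  defaults,nofail  0  2
```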
Create the account to be used for SFTP access (on a Debian system this will do the trick):
# adduser --system --home /media/Store/sftp --shell /usr/sbin/nologin sftp
This creates the account sftp with login disabled (its shell is /usr/sbin/nologin) and creates the home directory for this user.
Unfortunately the default ownership of this user's home directory is incompatible with chroot-ing in SFTP (which prevents access to other files on the server). In that case, a message like the one below will be generated:
$ sftp -v sftp@localhost
debug1: Authentication succeeded (password).
Authenticated to localhost ([::1]:22).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
Write failed: Broken pipe
Couldn't read packet: Connection reset by peer

Also /var/log/auth.log will contain something like this:
fatal: bad ownership or modes for chroot directory "/media/Store/sftp"
The default permissions are visible using the 'namei -l' command on the sftp home directory:
# namei -l /media/Store/sftp
drwxr-xr-x root root /
drwxr-xr-x root root media
drwxr-xr-x root root Store
drwxr-xr-x sftp nogroup sftp

We change the ownership of the sftp directory and make sure there is a place for files to be uploaded in the SFTP area:
# chown root:root /media/Store/sftp
# mkdir /media/Store/sftp/upload
# chown sftp /media/Store/sftp/upload
We isolate the sftp users from other users on the system and configure a chroot-ed environment for all users accessing the SFTP server:
# addgroup sftpusers
# adduser sftp sftpusers

Set a password for the sftp user so password authentication works:
# passwd sftp

Putting all the pieces together, we restrict access to the sftp user only, allowing it access via password authentication and only to SFTP, not SSH (and disallowing tunneling, forwarding and empty passwords).
Here are the changes done in /etc/ssh/sshd_config:
Subsystem sftp internal-sftp
AllowUsers sftp
Match Group sftpusers
    ChrootDirectory %h
    ForceCommand internal-sftp
    X11Forwarding no
    AllowTcpForwarding no
    PermitEmptyPasswords no
    PermitTunnel no

Reload the sshd configuration (I'm using systemd):
# systemctl reload ssh.service

Check that the sftp user can't log in via SSH:
$ ssh sftp@localhost
This service allows sftp connections only.
Connection to localhost closed.

But SFTP is working and is restricted to the SFTP area:
$ sftp sftp@localhost
Connected to localhost.
Remote working directory: /
sftp> put netbsd-nfs.bin
Uploading netbsd-nfs.bin to /netbsd-nfs.bin
remote open("/netbsd-nfs.bin"): Permission denied
sftp> cd upload
sftp> put netbsd-nfs.bin
Uploading netbsd-nfs.bin to /upload/netbsd-nfs.bin
netbsd-nfs.bin                          100% 3111KB   3.0MB/s   00:00

Now the system is ready to accept SFTP connections; files can be uploaded into the upload directory, and whenever the external drive is unmounted, SFTP will NOT work.
Note: Since we added 'AllowUsers sftp', you can verify that no other local user can log in via SSH. If you don't want to restrict access to the sftp user only, you can whitelist other users by adding them to the AllowUsers directive, or drop it entirely so all local users can SSH into the system.
DebConf team: Second Call for Proposals and Approved Talks for DebConf15 (Posted by DebConf Content Team)
DebConf15 will be held in Heidelberg, Germany from the 15th to the 22nd of August, 2015. The clock is ticking and our annual conference is approaching. There are less than three months to go, and the Call for Proposals period closes in only a few weeks.
This year, we are encouraging people to submit “half-length” 20-minute events, to allow attendees to have a broader view of the many things that go on in the project in the limited amount of time that we have.
To make sure that your proposal is part of the official DebConf schedule you should submit it before June 15th.
If you have already sent your proposal, please log in to summit and make sure to improve your description and title. This will help us fit the talks into tracks, and devise a cohesive schedule.
For more details on how to submit a proposal see: http://debconf15.debconf.org/proposals.xhtml.

Approved Talks
We have processed the proposals submitted up to now, and we are proud to announce the first batch of approved talks. Some of them:
- This APT has Super Cow Powers (David Kalnischkies)
- AppStream, Limba, XdgApp: Past, present and future (Matthias Klumpp)
- Onwards to Stretch (and other items from the Release Team) (Niels Thykier for the Release Team)
- GnuPG in Debian report (Daniel Kahn Gillmor)
- Stretching out for trustworthy reproducible builds - creating bit by bit identical binaries (Holger Levsen & Lunar)
- Debian sysadmin (and infrastructure) from an outsider/newcomer perspective (Donald Norwood)
- The Debian Long Term Support Team: Past, Present and Future (Raphaël Hertzog & Holger Levsen)
If you have already submitted your event and haven’t heard from us yet, don’t panic! We will contact you shortly.
We would really like to hear about new ideas, teams and projects related to Debian, so do not hesitate to submit yours.
See you in Heidelberg,
apt-get install memtest86+ smartmontools e2fsprogs
Prior to spending any time configuring a new physical server, I like to ensure that the hardware is fine.
To check memory, I boot into memtest86+ from the grub menu and let it run overnight.
Then I check the hard drives using:
smartctl -t long /dev/sdX
badblocks -swo badblocks.out /dev/sdX

Configuration
apt-get install etckeeper git sudo vim
To keep track of the configuration changes I make in /etc/, I use etckeeper to keep that directory in a git repository and make the following changes to the default /etc/etckeeper/etckeeper.conf:
- turn off daily auto-commits
- turn off auto-commits before package installs
To get more control over the various packages I install, I change the default debconf level to medium:
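The priority is changed interactively; the standard way to do this is:

```shell
# Re-run the debconf priority prompt and choose "medium"
dpkg-reconfigure debconf
```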
Since I use vim for all of my configuration file editing, I make it the default editor:
update-alternatives --config editor

ssh
apt-get install openssh-server mosh fail2ban
Since most of my servers are set to UTC time, I like to use my local timezone when sshing into them. Looking at file timestamps is much less confusing that way.
I also ensure that the locale I use is available on the server by adding it to the list of generated locales:
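On Debian this means uncommenting the locale in /etc/locale.gen and regenerating; for example (en_CA.UTF-8 is just a placeholder for whatever locale you use):

```shell
# Enable the locale, then regenerate the compiled locale data
sed -i 's/^# *en_CA.UTF-8/en_CA.UTF-8/' /etc/locale.gen
locale-gen
```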
Other than that, I harden the ssh configuration and end up with the following settings in /etc/ssh/sshd_config (jessie):
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms curve25519-sha256@libssh.org,ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
UsePrivilegeSeparation sandbox
AuthenticationMethods publickey
PasswordAuthentication no
PermitRootLogin no
AcceptEnv LANG LC_* TZ
LogLevel VERBOSE
AllowGroups sshuser
or the following for wheezy servers:
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms ecdh-sha2-nistp521,ecdh-sha2-nistp384,ecdh-sha2-nistp256,diffie-hellman-group-exchange-sha256
Ciphers aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512,hmac-sha2-256
On those servers where I need duplicity/paramiko to work, I also add the following:
KexAlgorithms ...,diffie-hellman-group-exchange-sha1
MACs ...,hmac-sha1
Then I remove the "Accepted" filter in /etc/logcheck/ignore.d.server/ssh (first line) to get a notification whenever anybody successfully logs into my server.
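One way to do that (a sketch; it assumes the 'Accepted' pattern really is the first line, so check the file before running it):

```shell
# Delete the first line of the ignore file so successful logins are reported
sed -i '1d' /etc/logcheck/ignore.d.server/ssh
```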
I also create a new group and add the users that need ssh access to it:
addgroup sshuser
adduser francois sshuser
and add a timeout for root sessions by putting this in /root/.bash_profile:
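The snippet itself isn't shown here; a minimal version, assuming a 10-minute timeout, would be:

```shell
# Log idle root shells out after 600 seconds of inactivity
TMOUT=600
readonly TMOUT
export TMOUT
```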
apt-get install logcheck logcheck-database fcheck tiger debsums
apt-get remove john john-data rpcbind tripwire
Logcheck is the main tool I use to keep an eye on log files, which is why I add a few additional log files to the default list in /etc/logcheck/logcheck.logfiles:
/var/log/apache2/error.log
/var/log/mail.err
/var/log/mail.warn
/var/log/mail.info
/var/log/fail2ban.log
while ensuring that the apache logfiles are readable by logcheck:
chmod a+rx /var/log/apache2
chmod a+r /var/log/apache2/*
and fixing the log rotation configuration by adding the following to /etc/logrotate.d/apache2:
create 644 root adm
I also modify the main logcheck configuration file (/etc/logcheck/logcheck.conf):
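The exact changes aren't listed here; the usual knobs in logcheck.conf are the report level and the destination address (the values below are only examples):

```
REPORTLEVEL="server"
SENDMAILTO="root"
```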
Other than that, I enable daily checks in /etc/default/debsums and customize a few tiger settings in /etc/tiger/tigerrc:
Tiger_Check_RUNPROC=Y
Tiger_Check_DELETED=Y
Tiger_Check_APACHE=Y
Tiger_FSScan_WDIR=Y
Tiger_SSH_Protocol='2'
Tiger_Passwd_Hashes='sha512'
Tiger_Running_Procs='rsyslogd cron atd /usr/sbin/apache2 postgres'
Tiger_Listening_ValidProcs='sshd|mosh-server|ntpd'

General hardening
apt-get install harden-clients harden-environment harden-servers apparmor apparmor-profiles apparmor-profiles-extra
While the harden packages are configuration-free, AppArmor must be manually enabled:
perl -pi -e 's,GRUB_CMDLINE_LINUX="(.*)"$,GRUB_CMDLINE_LINUX="$1 apparmor=1 security=apparmor",' /etc/default/grub
update-grub

Entropy and timekeeping
apt-get install haveged rng-tools ntp
To keep the system clock accurate and increase the amount of entropy available to the server, I install the above packages and add the tpm_rng module to /etc/modules.

Preventing mistakes
apt-get install molly-guard safe-rm sl
apt-get install apticron unattended-upgrades deborphan debfoster apt-listchanges update-notifier-common aptitude popularity-contest
These tools help me keep packages up to date and remove unnecessary or obsolete packages from servers. On Rackspace servers, a small configuration change is needed to automatically update the monitoring tools.
In addition to this, I use the update-notifier-common package along with the following cronjob in /etc/cron.daily/reboot-required:
#!/bin/sh
cat /var/run/reboot-required 2> /dev/null || true
to send me a notification whenever a kernel update requires a reboot to take effect.

Handy utilities
apt-get install renameutils atool iotop sysstat lsof mtr-tiny
Most of these tools are configuration-free, except for sysstat, which requires enabling data collection in /etc/default/sysstat to be useful.

Apache configuration
apt-get install apache2-mpm-event
While configuring apache is often specific to each server and the services that will be running on it, there are a few common changes I make.
I enable these in /etc/apache2/conf.d/security:
<Directory />
    AllowOverride None
    Order Deny,Allow
    Deny from all
</Directory>
ServerTokens Prod
ServerSignature Off
and remove cgi-bin directives from /etc/apache2/sites-enabled/000-default.
I also create a new /etc/apache2/conf.d/servername which contains:
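The file's contents aren't shown here; a minimal version (the hostname is a placeholder) would be:

```
ServerName myserver.example.com
```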
apt-get install postfix
Configuring mail properly is tricky, but the following has worked for me.
In /etc/hostname, put the bare hostname (no domain), but in /etc/mailname put the fully qualified hostname.
Change the following in /etc/postfix/main.cf:
inet_interfaces = loopback-only
myhostname = (fully qualified hostname)
smtp_tls_security_level = may
smtp_tls_protocols = !SSLv2, !SSLv3
Set the following aliases in /etc/aliases:
- set francois as the destination of root emails
- set an external email address for francois
- set root as the destination for www-data emails
before running newaliases to update the aliases database.
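Put together, the three rules above could look like this in /etc/aliases (the external address is a placeholder):

```
root: francois
francois: francois@example.com
www-data: root
```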
Create a new cronjob (/etc/cron.hourly/checkmail):
#!/bin/sh
ls /var/mail
to ensure that email doesn't accumulate unmonitored on this box.
Finally, set reverse DNS for the server's IPv4 and IPv6 addresses and then test the whole setup using mail root.

Network tuning
To reduce the server's contribution to bufferbloat I change the default kernel queueing discipline by putting the following in /etc/sysctl.conf:
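The actual sysctl setting isn't shown here; the usual choice for fighting bufferbloat is the fq_codel queueing discipline:

```
net.core.default_qdisc = fq_codel
```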
Weblate 2.3 has been released today. It comes with better features for project owners, better file formats support and more configuration options for users.
Full list of changes for 2.3:
- Dropped support for Django 1.6 and South migrations.
- Support for adding new translations when using Java Property files
- Allow to accept suggestion without editing.
- Improved support for Google OAuth2.
- Added support for Microsoft .resx files.
- Tuned default robots.txt to disallow big crawling of translations.
- Simplified workflow for accepting suggestions.
- Added project owners who always receive important notifications.
- Allow to disable editing of monolingual template.
- More detailed repository status view.
- Direct link for editing template when changing translation.
- Allow to add more permissions to project owners.
- Allow to show secondary language in zen mode.
- Support for hiding source string in favor of secondary language.
You can find more information about Weblate on http://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the password demo, or register your own user.
If you run a free software project which would like to use Weblate, I'm happy to help you with the setup or even host Weblate for you.
Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far!
A brief summary of changes from the NEWS file is below.

Changes in version 1.58.0-1 (2015-05-21)
Upgraded to Boost 1.58 installed directly from upstream source
Added Boost MultiPrecision as requested in GH ticket #12 based on rcpp-devel request by Jordi Molins Coronado
So, following the previous post, I've indeed updated the way I'm making my grsec kernels.
I wanted to upgrade my server to Jessie, and didn't want to keep the 3.2 kernel indefinitely, so I had to update to at least 3.14, and find something to make my life (and maybe some others) easier.
In the end, like planned, I've switched to the make deb-pkg way, using some scripts here and there to simplify stuff.
The scripts and configs can be found in my debian-grsec-config repository. The repository layout is pretty much self-explanatory:
The bin/ folder contains two scripts:
- get-grsec.sh, which picks the latest grsec patch (for each branch) and applies it to the corresponding Linux branch. This script should be run from a git clone of the linux-stable git repository;
- kconfig.py is taken from the src:linux Debian package, and can be used to merge multiple KConfig files
The configs/ folder contains the various configuration bits:
- config-* files are the Debian configuration files, taken from the linux-image binary packages (for amd64 and i386);
- grsec* are the grsecurity specifics bits (obviously);
- hardening* contain non-grsec bits that are still useful for hardened kernels, for example KASLR (cargo-culting notwithstanding) or strong SSP (available since I'm building the kernels on a sid box, YMMV).
I'm currently building amd64 kernels for Jessie and i386 kernels will follow soon, using config-3.14 + hardening + grsec. I'm hosting them on my apt repository. You're obviously free to use them, but considering how easy it is to rebuild a kernel, you might want to use a personal configuration (instead of mine) and rebuild the kernel yourself, so you don't have to trust my binary packages.
Here's a very quick howto (adapt it to your needs):
mkdir linux-grsec && cd linux-grsec
git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
git clone git://anonscm.debian.org/users/corsac/grsec/debian-grsec-config.git
mkdir build
cd linux-stable
../debian-grsec-config/bin/get-grsec.sh stable2 # for 3.14 branch
../debian-grsec-config/bin/kconfig.py ../build/.config ../debian-grsec-config/configs/config-3.14-2-amd64 ../debian-grsec-config/configs/hardening ../debian-grsec-config/configs/grsec
make KBUILD_OUTPUT=../build -j4 oldconfig
make KBUILD_OUTPUT=../build -j4 deb-pkg
Then you can use the generated Debian binary packages. If you use the Debian config, compilation will need a lot of disk space and generate a huge linux-image debug package, so you might want to unset CONFIG_DEBUG_INFO locally if you're not interested. Right now only the deb files are generated, but I've submitted a patch to also produce a .changes file, which can then be used to manipulate them more easily (for example for uploading them to a local Debian repository).
Note that, obviously, this is not targeted for inclusion to the official Debian archive. This is still not possible for various reasons explained here and there, and I still don't have a solution for that.
I hope this (either the scripts and config or the generated binary packages) can be useful. Don't hesitate to drop me a mail if needed.
As I slowly upgrade all my machines to Debian 8.0 (jessie) they’re all ending up with systemd. That’s fine; my laptop has been running it since it went into testing whenever it was. Mostly I haven’t had to care, but I’m dimly aware that it has a lot of bits I should learn about to make best use of it.
Today I discovered systemctl is-system-running. Which I’m not sure why I’d use it, but when I ran it it responded with degraded. That’s not right, thought I. How do I figure out what’s wrong? systemctl --state=failed turned out to be the answer.
# systemctl --state=failed
  UNIT                          LOAD   ACTIVE SUB    DESCRIPTION
● systemd-modules-load.service  loaded failed failed Load Kernel Modules

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
Ok, so it’s failed to load some kernel modules. What’s it trying to load? systemctl status -l systemd-modules-load.service led me to /lib/systemd/systemd-modules-load which complained about various printer modules not being able to be loaded. Turned out this was because CUPS had dropped them into /etc/modules-load.d/cups-filters.conf on upgrade, and as I don’t have a parallel printer I hadn’t compiled up those modules. One of my other machines had also had an issue with starting up filesystem quotas (I think because there’d been some filesystems that hadn’t mounted properly on boot - my fault rather than systemd). Fixed that up and then systemctl is-system-running started returning a nice clean running.
Now this is probably something that was silently failing back under sysvinit, but of course nothing was tracking that other than some output on boot up. So I feel that I’ve learnt something minor about systemd that actually helped me cleanup my system, and sets me in better stead for when something important fails.
A few days ago, I started writing the Odorik module for working with the API of one Czech mobile network operator. As usual, the code comes with documentation written in English. Given that the vast majority of its users are Czech, it seemed useful to have the documentation in Czech as well.
The first step is to add the necessary configuration to the Sphinx project, as described in its Internationalization Quick Guide. It's a matter of a few configuration directives and an invocation of sphinx-intl, and the result can look like this commit.
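For reference, the Sphinx side of this boils down to a couple of conf.py settings (the locale directory name below is the conventional one, not necessarily what this project uses):

```python
# Sphinx i18n settings: where sphinx-intl writes the .po files, and
# one .po file per source document rather than one per directory.
locale_dirs = ['locale/']
gettext_compact = False
```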
Once the code in the repository is ready, you can start building translated documentation on Read the Docs. There is a nice guide for that as well. All you need to do is create another project, set its language, and link it from the master project as a translation.
The last step is to find translators to actually translate the document. For me the obvious choice was Weblate, so the translation is now on Hosted Weblate. The mass import of several po files can be done using the import_project management command.
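Weblate's import_project command takes a project slug, a repository, a branch, and a filemask; a sketch (the repository URL and filemask are assumptions):

```shell
# Import all existing .po files as components of the "odorik" project
./manage.py import_project odorik \
    https://github.com/example/odorik.git master \
    'docs/locale/*/LC_MESSAGES/*.po'
```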
And thanks to all of this, you can now read the Czech documentation for the python Odorik module.
A new release 0.2.13 of RInside is now on CRAN. RInside provides a set of convenience classes which facilitate embedding of R inside of C++ applications and programs, using the classes and functions provided by Rcpp.
This release works around a bug in R 3.2.0 which has been addressed in R 3.2.0-patched. The NEWS extract below has more details.

Changes in RInside version 0.2.13 (2015-05-20)
Added a workaround for a bug in R 3.2.0: by including the file RInterface.h only once we no longer get linker errors due to multiple definitions of R_running_as_main_program (which is now addressed in R-patched as well).
Small improvements to the Travis CI script.
CRANberries also provides a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.
- restructuring website as 21st century style and drop deprecated info
- automate test integration to infrastructure: piuparts & ci
- more test for packages: adopt autopkgtest
- "go/no-go vote" for migration to next release candidate (e.g. bohdi in Fedora)
- more comfortable collab-development (see GitHub's pull request style, not mail-centric)
This morning, I pushed out version 2.11.17 of the Geode X.Org driver. This is the driver used by the OLPC XO-1 and by a plethora of low-power desktops, micro notebooks and thin clients. This is a minor release. It merges conditional support for the OpenBSD MSR device (Marc Ballmer, Matthieu Herrb), fixes a condition that prevents compiling on some embedded platforms (Brian A. Lloyd) and upgrades the code for X server 1.17 compatibility (Maarten Lankhorst).
- toggle COM2 into DDC probing mode during driver initialization
- reset the DAC chip when exiting X and returning to vcons
- fix a rendering corner case with Libre Office
‘Love thy neighbor as thyself’, words which astoundingly occur already in the Old Testament.
One can love one’s neighbor less than one loves oneself; one is then the egoist, the racketeer, the capitalist, the bourgeois. and although one may accumulate money and power one does not of necessity have a joyful heart, and the best and most attractive pleasures of the soul are blocked.
Or one can love one’s neighbor more than oneself—then one is a poor devil, full of inferiority complexes, with a longing to love everything and still full of hate and torment towards oneself, living in a hell of which one lays the fire every day anew.
But the equilibrium of love, the capacity to love without being indebted to anyone, is the love of oneself which is not taken away from any other, this love of one’s neighbor which does no harm to the self.
I always have a hard time finding this quote on the Internet. Let's fix that.
I wrote well over a year ago about Earthlings. It really did have an impact on my life. Nowadays I try to avoid animal products where possible, especially in my food. While following vegan-related news, I stumbled upon a great band from Germany: Berge. They recently made a deal with their record label: if their song 10.000 Tränen receives one million clicks within the next two weeks, the label will donate 10.000,- euros to a German animal rights organization. Reason enough for me to share this band with you! :)
- 10.000 Tränen: This is the song that needs the views. It's a nice tune with great lyrics to think about. Even though it's in German, it has English subtitles. :)
- Schauen was passiert: In the light of 10.000 Tränen it was hard for me to select other songs, but this one sounds nice. "Let's see what happens". :)
- Meer aus Farben: I love colors. And I hate the fact that most conference shirts are black only. Or that it seems to be impossible to find colorful clothes and shoes for tall women.
Like always, enjoy!
Eddy Petrișor: Linksys NSLU2 adventures into the NetBSD land passed through JTAG highlands - part 2 - RedBoot reverse engineering and APEX hacking
Choosing to call RedBoot from a hacked Apex
As I was saying in my previous post, in order to be able to automate the booting of the NetBSD image via TFTP I opted for using a 2nd stage bootloader (planning to flash it in the NSLU2 instead of a Linux kernel), and since Debian was already using Apex, I chose Apex, too.
The first problem I found was that the networking support in Apex relied on an old version of the Intel NPE library which I couldn't find on Intel's site, while the new version was incompatible with (and didn't build against) the old build wrapper in Apex, so I was faced with 3 options:
- Fight with the available Intel code and try to force it to compile in Apex
- Incorporate the NPE driver from NetBSD into a rump kernel to be included in Apex instead of the original Intel code, since the NetBSD driver only needed an easily compilable binary blob
- Hack together an Apex version that simulates typing the necessary RedBoot commands to load the NetBSD image via TFTP and execute it.
Option 2 looked like the best option I could have, given the situation, but my NetBSD-fu was too close to 0 to even dream of endeavoring on such a task. In my evaluation, this still remains the technically superior solution to the problem, since it is a very portable and flexible way to ensure networking works in spite of the proprietary NPE code.
But, in practice, the best option I could implement at the time was option 3. I initially planned to pre-fill, from Apex, the RedBoot buffer that stored the keyboard strokes with my desired commands:
load -r -b 0x200000 -h 192.168.0.2 netbsd-nfs.bin
g

Since this was the first time ever I was going to do less-than-trivial reverse engineering in order to find the addresses and signatures of interesting functions in the RedBoot code, it wasn't bad at all that I had a version of the RedBoot source code.
When stuck with reverse engineering, apply JTAG
The bad thing was that the code Linksys published as the source of the RedBoot running inside the NSLU2 was, in fact, a different code which had some significant changes around the code pieces I was mostly interested in. That in spite of the GPL terms.
But I thought that I could manage in spite of that. After all, how hard could it be to identify the 2-3 functions I was interested in, plus 1 buffer? Even if I only had the disassembled code from the slug, it shouldn't be that hard.
I struggled with this for about 2-3 weeks on the few occasions I had during that time, but the excitement of learning something new kept me going. Until I got stuck somewhere between the misalignment between the published RedBoot code and the disassembled code, the state of the system at the time of dumping the contents from RAM (for purposes of disassembly), the assembly code generated by GCC for some specific C code I didn't have at all, and the particularities of ARM assembly.
What was most likely to unblock me was to actually see the code in action, so I decided that attaching a JTAG dongle to the slug and doing a session of in-circuit debugging was in order.
Luckily, the pinout of the JTAG interface was already identified in the NSLU2 Linux project, so I only had to solder some wires to the specified places and a 2x20 header to be able to connect through JTAG to the board.
JTAG connections on Kinder (the NSLU2 targeting NetBSD)
After this was done, I immediately tried to see whether I could break the execution of the code on the system when using a JTAG debugger. The answer was, sadly, no.
The chip was identified, but breaking the execution was not happening. I tried this in OpenOCD and in another, proprietary debugger application I had access to, and the result was the same: breaking was not happening.
$ openocd -f interface/ftdi/olimex-arm-usb-ocd.cfg -f board/linksys_nslu2.cfg
Open On-Chip Debugger 0.8.0 (2015-04-14-09:12)
Licensed under GNU GPL v2
For bug reports, read http://openocd.sourceforge.net/doc/doxygen/bugs.html
Info : only one transport option; autoselect 'jtag'
adapter speed: 300 kHz
Info : ixp42x.cpu: hardware has 2 breakpoints and 2 watchpoints
Info : clock speed 300 kHz
Info : JTAG tap: ixp42x.cpu tap/device found: 0x29277013 (mfg: 0x009, part: 0x9277, ver: 0x2)
[..]
$ telnet localhost 4444
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Open On-Chip Debugger
> halt
target was in unknown state when halt was requested
in procedure 'halt'
> poll
background polling: on
TAP: ixp42x.cpu (enabled)
target state: unknown
Looking into the documentation, I found a bit of information on the XScale processors[X] which suggested that XScale processors might need the (otherwise optional) SRST signal on the JTAG interface in order to be able to single step the chip.
This confused me a lot since I was sure other people had already used JTAG on the NSLU2.
The options I saw at the time were:
- my NSLU2 didn't have a fully working JTAG interface (either due to the missing SRST signal on the interface or maybe due to a JTAG lock on later generation NSLU2s, as my second slug was)
- nobody ever single stepped the slug using OpenOCD or other JTAG debugger, they only reflashed, and I was on totally new ground
This confused me even further because, from what I had encountered on other platforms, in order to flash some device, the code responsible for programming the flash is loaded into the RAM of the target microcontroller, a RAM buffer with the data to be flashed is preloaded via JTAG, and then that code is executed on the target; the operation is repeated for all flash blocks to be reprogrammed.
I was aware it was possible to program a flash chip situated on the board, outside the CPU, strictly via JTAG, by only driving the chip's pads, but I was still hoping that single stepping the execution of the code in RedBoot was possible.
Guided by that hope, and by the possibility that the newer versions of the device were locked, I decided to add a JTAG interface to my older NSLU2, too. But this time I decided I would also add the TRST and SRST signals to the JTAG interface, just in case single stepping would work.
This mod involved even more extensive changes than the ones done on the other NSLU2, but I was so frustrated by being stuck that I didn't mind poking a few holes through the case, nor the prospect of a connector permanently sticking out of the other NSLU2, which was doing some small yet useful work in my home LAN.
It turns out NOBODY single stepped the NSLU2
After biting the bullet and soldering a JTAG interface with the TRST and SRST signals also connected, as the pinout page on the NSLU2 Linux wiki suggested, I was disappointed to observe that I was not able to single step the older slug either, in spite of the presence of the extra signals.
I even tinkered with the reset configurations of OpenOCD, but had no success. After obtaining the same result with the proprietary debugger, and after digging through a presentation made by Rod back in the heyday of the project and the conversations on the NSLU2 Linux Yahoo mailing list, I finally concluded:
Actually nobody ever single stepped the NSLU2, no matter the version of the NSLU2 or the connections available on the JTAG interface!
So I was back to square one: I had to either struggle with the disassembly, reevaluate my initial options, find another option, or drop the idea entirely. Since I was already committed to the project, dropping the idea entirely didn't seem like the reasonable thing to do.
Since I felt I was really close to finishing on the route I had chosen a while ago, was not significantly more knowledgeable in the NetBSD code, and looking at the NPE code made me feel like washing my hands, the only option I saw was to go on.
Digging a lot more through the internet, I was finally able to find another version of the RedBoot source which was modified for Intel ixp42x systems. A few checks here and there revealed this newly found code was actually almost identical to the code I had disassembled from the slug I was aiming to run NetBSD on.
Long story short, a couple of days later I had a hacked Apex that could go through the RedBoot data structures, search for available commands in RedBoot and successfully call any of the built-in RedBoot commands!
Testing by loading this modified Apex by hand into RAM via TFTP and then jumping into it to see if things worked as expected revealed a few small issues, which I corrected right away.
Flashing a modified RedBoot?! But why? Wasn't Apex supposed to avoid exactly that risky operation?
Since the tests when executing from RAM were successful, my custom second stage Apex bootloader for NetBSD net booting was ready to be flashed into the NSLU2.
I added two more targets to the Makefile on the dedicated netbsd branch of my Apex repository to generate the images ready for flashing into the NSLU2 flash (RedBoot needs to find a Sercomm header in flash, otherwise it will crash), and the exact commands to be executed in RedBoot are also printed out after generation. This way, if the command is copy-pasted, there is no risk of bricking the NSLU2 by mistake.
After some flashing and reflashing of the apex_nslu2.flash image into the NSLU2 flash, some manual testing, tweaking and modifying of the default built-in Apex commands, and checking that the sequence of commands 'move', 'go 0x01d00000' would jump into Apex, which, in turn, would call RedBoot to transfer the netbsd-nfs.bin image from a TFTP server to RAM and then execute it successfully, it was high time to check that NetBSD would boot automatically after the NSLU2 was powered on.
It didn't. Contrary to my previous tests, no call made from Apex to the RedBoot code would return back to Apex, not even a basic execution of the 'version' command.
It turns out the default commands hardcoded into RedBoot were 'boot; exec 0x01d00000', but I had tested 'boot; go 0x01d00000', which is not the same thing.
While 'go' does a plain jump to the specified address, the 'exec' command also does some preparations to allow a jump into a Linux kernel, and those preparations break the environment that the RedBoot commands expect.
So the easiest solution was to change RedBoot's built-in command and turn that 'exec' into a 'go'. But that meant that this time I was actually risking bricking the NSLU2, unless I was able to reflash it via JTAG.
(to be continued - next, changing RedBoot and bisecting through the NetBSD history)
[X] The Linksys NSLU2 has an XScale IXP420 processor, which is compatible at the assembly level with the ARMv5TE instruction set
The other day I received from my Japanese teacher an interesting article by Yamazaki Masakazu 山崎正和 comparing garden styles, and in particular the attitude towards and presentation of water in Japanese and European gardens (page 1, page 2). The author’s list of achievements is long: various professorships, dramatist, literary critic, recognition as a Person of Cultural Merit, just to name a few. I was looking forward to an interesting and high-quality article!
The article itself introduces the reader to 鹿おどし Shishiodoshi, one of the standard ingredients of a Japanese garden: It is a device where water drips into a bamboo tube that is also a seesaw. At some point the water in the bamboo tube makes the seesaw switch over and the water pours out, after which the seesaw returns to the original position and a new cycle begins.
The author describes his feelings and thoughts about the shishiodoshi, in particular connecting human life (stress and relief, cycles), the flow of time, and some other concepts with the shishiodoshi. Up to here it is a wonderful article, providing interesting insights into the way the author thinks. Unfortunately, the author then tries to underline his ideas by comparing the Japanese shishiodoshi with European-style water fountains, describing the former with all favorable properties and full of deep meaning, while the latter is qualified as beautiful and nice, but devoid of any deeper meaning.
I won’t go into the details of why the whole comparison is a bad one anyway, as he compares against Baroque-style fountains, a very limited period, and furthermore ignores the fact that water fountains are not genuinely European (isn’t there one in Kenrokuen, one of the three most famous gardens in Japan!?), nor does he consider any other “water installation” that might be available. What really spoils the in principle very nice article is the tone:
The general tone of the article can then be summarized as: “The shishiodoshi is rich in meaning, connects to the life of humans, instigates philosophical reflections, represents nature, the flow of time, etc. The water fountain is beautiful and gorgeous, but that is all.”
I don’t think that this separation, or this negative undertone, was created on purpose by the author. A person of his stature is supposedly above this level of primitive comparison. I believe that it is nothing else but a consequence of upbringing and the general attitude that permeates the whole society with this feeling of separateness.
Us and They
He repeatedly provides sentences like “Japanese people and Western people have different tastes…” (日本人は西洋人と違った独特の好みを持っていたのである). About 10 times in this short article, expressions like “Japanese” and “Westerner” appear, leaving the reader with the bitter taste of an author who considers, first, the Japanese a single people (what about the Ainu, Ryukyu, etc.?), and second, the Japanese exclusive in the sense that they are set apart from the rest of the world in their way of thinking, living, and being.
What puzzles me is that this is not only a singular opinion, but a very general strain in the Japanese media: TV, radio, newspapers, books. Everyone considers “Japan” and “Japanese” as something fundamentally and profoundly different from everyone else in the world.
There is “We – the Japanese” (and that doesn’t mean nationality or passport, but blood line!), and there are “They – the Rest” or, as the way of writing and description on many occasions suggests, “They – the Barbarians”.
A short anecdote will underline this: one of the favorite TV talk show / pseudo-documentary formats is about Japanese living abroad. That day, a lady married in Paris was interviewed. What followed was a guided tour through Paris showing: dirt in the gutter, street cleaning cars, waste disposal places. Yes, that was all. Just about the “dirt”. Of course, at length the (unfortunately only apparent) cleanliness of Japanese cities and neighborhoods was mentioned and shown, to remind everyone how wonderful Japan is and how dirty the Barbarians are. I don’t want to say that I consider Japan dirtier than most other countries – just that only the visible part is clean; the moment you step a bit aside and around the corner, there is the worst trash just thrown away without consideration. Anyway.
To return to the topic of “Us and They” – I consider us all humans, first and foremost; nationality, birthplace, and all that are just chance. I do NOT reject cultural differences; they are there, of course. But cultural differences are one thing; separating oneself and one’s perceived people from the rest of the world is another.
Conclusion
I repeat, I don’t think that the author had any ill intentions, but it would have been nicer if the article didn’t make such a stark distinction. He could have written about the shishiodoshi and water fountains without using the “Us – They” categorization. He could have compared other water installations, or discussed the long tradition of small ponds in European gardens, just to name a few things. But the author chose to highlight differences instead of commonalities.
It is the “Us against Them” feeling that often makes life in Japan difficult for a foreigner. The Japanese are not special; Austrians, too, are not special, nor are Americans, Russians, Tibetans, or any other nationality. No nationality is special; we are all humans. Maybe at some point this will also arrive in Japanese society and thinking.
Today I feel more special than I have ever felt.
Or... Well, or something like that.
Thing is, there is no clear adjective for this — but I successfully finished my Specialization degree! Yes, believe it or not, today I can formally say I am a Specialist in Information Security and Information Technologies (Especialista en Seguridad Informática y Tecnologías de la Información), as awarded by the Higher School of Mechanical and Electrical Engineering (Escuela Superior de Ingeniería Mecánica y Eléctrica) of the National Polytechnic Institute (Instituto Politécnico Nacional).
In Mexico and most Latin American countries, degrees are usually incorporated into your name as if they were a nobiliary title. Thus, when graduating from Engineering studies (pre-graduate university level), I became "Ingeniero Gunnar Wolf". People graduating from further postgraduate programs get to introduce themselves as "Maestro Foobar Baz" or "Doctor Quux Noox". And yes, a Specialization is a small postgraduate program (I often say, the smallest possible postgraduate degree). And as a Specialist... What can I brag about? Can I say I am Specially Gunnar Wolf? Or Special Gunnar Wolf? Nope. The honorific title for a Specialization is a pointer to null, and when cast into a char* it might corrupt your honor-recognizing function. So I'm still Ingeniero Gunnar Wolf, for information security reasons.
So that's the reason I am now enrolled in the Masters program. I hope to write an addendum to this message soonish (where soonish ≥ 18 months) saying I'm finally a Maestro.
As a sidenote, many people asked me: why did I take on the Specialization, a degree too small for most kinds of real work recognition? Because it has been around twenty years since I last attended a long-term academic program as a student, and my dish is quite full with activities and responsibilities. I decided to take a short program, designed for 12 months (I graduated in 16, minus two months during which the university was on strike... quite good, I'd say ;-) ), to see how I fared on it, and only later jump into the full version.
Because, yes, to advance my career at the university, I finally recognized and understood that I do need postgraduate studies.
Oh, and what kind of work did I do for this? Besides the classes I took, I wrote a thesis on a model for evaluating covert channels for establishing secure communications.