Planet Debian

Planet Debian - http://planet.debian.org/

Joey Hess: twenty years of free software -- part 13 past and future

12 July, 2016 - 07:07

This series has focused on new projects. I could have said more about significant work that didn't involve starting new projects. A big example was when I added dh to debhelper, which led to changes in a large percentage of debian/rules files. And I've contributed to many other free software projects.

I guess I've focused on new projects because it's easier to remember things I've started myself. And because a new project is more wide open, with more scope for interesting ideas, especially when it's free software being created just because I want to, with no expectations of success.

But starting lots of your own projects also makes you responsible for maintaining a lot of stuff. Ten years ago I had dozens of projects that I'd started and was still maintaining. Over time I started pulling away from Debian, with active projects increasingly not connected with it. By the end, I'd left and stopped working on the old projects. Nearly everything from my first decade in free software was passed on to new maintainers. It's good to get out from under old projects and concentrate on new things.

I saved propellor for last in this series, because I think it may point toward the future of my software. While git-annex was the project that taught me Haskell, propellor's design is much more deeply influenced by the Haskell viewpoint.

Absorbing that viewpoint has itself been a big undertaking for me this decade. It's like I was coasting along feeling at the top of my game and wham got hit by the type theory bus. And now I see that I was stuck in a rut before, and begin to get a feeling of many new possibilities.

That's a good feeling to have, twenty years in.

Niels Thykier: mips64el added to Debian testing

12 July, 2016 - 02:09

Today, we have completed our first Britney run with mips64el enabled in testing. 

At the current time, the set of packages on mips64el is not very connected (and you probably cannot even install build-essential yet[1]). Hopefully this will change over the next few days. For now, Britney does not enforce installability of packages on mips64el in general, so do not expect the architecture to be stable at the moment.

Cheat sheet for package maintainers:

  • Issues with builds (only) on mips64el are not blockers for testing migration (nor RC yet).
    • Such bugs should be filed as “important” for now (unless they also affect a release architecture, in which case you should still make it an RC bug)
  • Your package may be an older version on mips64el compared to other architectures.
  • Britney may decide to break your package on mips64el if it means something else can migrate on a release architecture.

We will slowly remove these special cases for mips64el as it matures in testing.

 

[1]  Update on this: mips64el does not have a libc library yet, so build-essential is definitely not installable at the moment. It will hopefully migrate very soon.


Filed under: Debian, Release-Team

Jonathan Dowland: iPod

11 July, 2016 - 23:26

iPod with rockbox

open iPod with iFlash

It's been four years since I last wrote about music players. In the meantime, two of my three Sansa Fuzes broke, and the third does not have great Rockbox support. I've also been using a Sansa Clip+ (a leaving present from my last job, thanks again!) and a Sansa Clip Zip. Unfortunately Sandisk's newer Sansa devices (Sport, Jam - the only ones still in production) are not supported by Rockbox.

The Clips have been very reliable and sturdy players, but I have missed the larger display of the Fuze. Since I've been exploring HD audio, I've also been interested in something with a D/A converter that can handle it. I also still wish to carry my entire music library around with me, which limits my options.

I decided to try an iPod. The older iPods had a Wolfson-manufactured DAC which had a good reputation and supported (in headline terms at least) 24/48. The iPod colour (aka "4th gen") and subsequent models have a large colour display. The click-wheel interface is also very nice. Apple have now discontinued the "classic" iPod and their second-hand value has greatly increased, but I managed to get an older 5th generation model ("video", with a base capacity of 30G) whilst trading in some unwanted DVDs. The case was scratched to heck but a replacement was readily and cheaply available from auction sites.

Rockbox support in these iPods is pretty good, and you can mod the hardware to support CF or SD cards with kits such as the iFlash kits by Tarkan Akdam, which I picked up, along with a new 128G SD card.

Unfortunately I have found writing to the iPod under Rockbox to be very poor, but it's fine for playback, and booting the iPod into the original firmware (OF) or DFU mode is very easy and works reliably.

Whilst Rockbox on the iPod works pretty well, installing it is far harder than on the Sandisk Sansa devices. The difficulty in my case was that Rockbox requires a PC-formatted iPod to install, and I had a Mac-formatted one. I couldn't find a way to convert the iPod to PC format using a Mac. I tried doing so on a PC but for some reason the PC wasn't playing ball, so I gave up after a few hours. In the end I assembled the filesystem by hand using dd(1) and dumps of partition tables from other people's iPods, via a Linux machine. This was enough to convince iTunes on the Mac to restore its hidden partition and boot software without reverting to a Mac disklabel format.
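
For anyone attempting the same thing, the shape of that dd step is roughly as follows. This is only a sketch: the dump filename and device node are assumptions, and it presumes the usual PC-formatted iPod layout of a small firmware partition followed by a FAT32 data partition, so double-check the device node before writing anything.

# Hypothetical example -- verify /dev/sdX really is the iPod before running!
sudo dd if=ipod-mbr-dump.bin of=/dev/sdX bs=512 count=1   # write a donated partition table
sudo partprobe /dev/sdX                                    # re-read the new partition table
sudo mkfs.vfat -F 32 -n IPOD /dev/sdX2                     # recreate the FAT32 data partition

After that, iTunes can restore the hidden firmware partition as described above.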

Daniel Pocock: Let's Encrypt torpedoes cost and maintenance issues for Free RTC

11 July, 2016 - 20:34

Many people have now heard of the EFF-backed free certificate authority Let's Encrypt. Not only is it free of charge, it has also introduced a fully automated mechanism for certificate renewals, eliminating a tedious chore that has been imposed on busy sysadmins everywhere for many years.

These two benefits - elimination of cost and elimination of annual maintenance effort - imply that server operators can now deploy certificates for far more services than they would have previously.

The TLS chapter of the RTC Quick Start Guide has been updated with details about Let's Encrypt so anybody installing SIP or XMPP can use Let's Encrypt from the outset.

For example, somebody hosting basic Drupal or Wordpress sites for family, friends and small community organizations can now offer them all full HTTPS encryption, WebRTC, SIP and XMPP without having to explain annual renewal fees or worry about losing time in their evenings and weekends renewing certificates manually.

Even people who were willing to pay for a single certificate for their main web site may have balked at the expense and ongoing effort of maintaining certificates for their SMTP mail server, IMAP server, VPN gateway, SIP proxy, XMPP server, WebSocket and TURN servers too. Now they can all have certificates.

Early efforts at SIP were doomed without encryption

In the early days, SIP messages would be transported across the public Internet in UDP datagrams without any encryption. SIP itself wasn't originally designed for NAT, and a variety of home routers were created with "NAT helper" algorithms that would detect and modify SIP packets to try and work through NAT. Sadly, in many cases these attempts to help actually clash with each other and lead to further instability. Conversely, many rogue ISPs could easily detect and punish VoIP users by blocking their calls or even cutting their DSL line. Operating SIP over TLS, usually on the HTTPS port (TCP port 443), has been an effective way to quash all of these different issues.

While the example of SIP is one of the most extreme, it helps demonstrate the benefits of making encryption universal to ensure stability and cut out the "man-in-the-middle", regardless of whether he is trying to help or hinder the end user.

Is one certificate enough?

Modern SIP, XMPP and WebRTC require additional services: TURN servers and WebSocket servers. If they are all operated on port 443 then it is necessary to use different hostnames for each of them (e.g. turn.example.org and ws.example.org). Each different hostname requires a certificate. Let's Encrypt can provide those additional certificates too, without additional cost or effort.

The future with Let's Encrypt

The initial version of the Let's Encrypt client, certbot, fully automates the workflow for people using popular web servers such as Apache and nginx. The manual or certonly modes can be used for other services but hopefully certbot will evolve to integrate with many other popular applications too.
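
As a rough illustration (the hostnames and webroot path here are invented, not taken from the guide), the certonly mode can request a single certificate covering several RTC hostnames, validated via an existing webroot:

# Hypothetical example: one certificate for two RTC hostnames,
# validated over HTTP through an existing webroot.
certbot certonly --webroot -w /var/www/html \
    -d sip.example.org -d turn.example.org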

Currently, Let's Encrypt only issues certificates to servers running on TCP port 443. This is considered to be a privileged port, whereas ports above 1023, including the default ports used by applications such as SIP (5061), XMPP (5222, 5269) and TURN (5349), are not. As long as Let's Encrypt maintains this policy, it is necessary to either run a web server for the domain associated with each certificate or run the services themselves on port 443. Running the services themselves on port 443 turns out to be a good idea anyway, as it ensures that RTC services can be reached through HTTP proxy servers that fail to let the HTTP CONNECT method access any other ports.

Many configuration tasks are already scripted during the installation of packages on a GNU/Linux distribution (such as Debian or Fedora) or when setting up services using cloud images (for example, in Docker or OpenStack). Due to the heavily standardized nature of Let's Encrypt and the widespread availability of the tools, many of these package installation scripts can be easily adapted to find or create Let's Encrypt certificates on the target system, ensuring every service is running with TLS protection from the minute it goes live.
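
A package hook or cloud-image setup script might do something along these lines; this is only a sketch with an invented hostname, and it assumes certbot is already installed:

# Hypothetical post-install snippet: reuse an existing Let's Encrypt
# certificate if one is present, otherwise request a new one.
DOMAIN=rtc.example.org
LIVE=/etc/letsencrypt/live/$DOMAIN
if [ ! -f "$LIVE/fullchain.pem" ]; then
    certbot certonly --webroot -w /var/www/html -d "$DOMAIN"
fi
# the SIP/XMPP/TURN daemon is then pointed at $LIVE/fullchain.pem
# and $LIVE/privkey.pem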

If you have questions about Let's Encrypt for RTC or want to share your experiences, please come and discuss it on the Free-RTC mailing list.

Dirk Eddelbuettel: Rcpp now used by over 700 CRAN packages

11 July, 2016 - 18:41

Earlier this morning, Rcpp reached another milestone: 701 packages on CRAN now depend on it (as measured by Depends, Imports and LinkingTo declarations). The graph on the left depicts the growth of Rcpp usage over time.

Rcpp cleared 300 packages in November 2014. It passed 400 packages in June of last year (when I only tweeted about it), 500 packages in late October and 600 packages exactly four months ago in March. The chart extends to the very beginning via manually compiled data from CRANberries, checked against crandb. The next part uses manually saved entries, and the final and largest part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is kept on this page.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four percent hurdle was cleared just before useR! 2014, where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent last July, seven percent just before Christmas, and now criss-cross eight percent, or a little less than one in twelve R packages.

700 user packages is a really large and humbling number. This places quite some responsibility on us in the Rcpp team as we continue to try our best to keep Rcpp as performant and reliable as it has been.

So with that a very big Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Simon Désaulniers: [GSOC] Week 5&6 Report

11 July, 2016 - 01:06

During weeks 5 and 6, I was at the Debian conference 2016. It was really interesting to meet so many people who are all so involved in Debian.

Key signing party

Each year, this is a really important tradition: people gather together, exchange GPG public key fingerprints and sign each other’s keys. This contributes greatly to the growth of the web of trust.

I did exchange public key fingerprints with others. It was my first opportunity to become more connected in the PGP web of trust. I think this is something that needs to make its way to less technical people so that everyone can benefit from its privacy features. There is support in some mail clients like Thunderbird, but I think there is still work to do there, and mostly there is work to do on the opinion or point of view people have about encryption; people don’t care enough and they don’t really know what encryption can do for them (digital signatures, trust and privacy).

Ring now part of Debian

During the first week at DebCamp, Alexandre Viau, an employee at Savoir-Faire Linux and also a Debian developer (DD for short), has packaged Ring for Debian. Users can now install Ring like so:

$ sudo apt-get install ring

This is an important moment as more people are now going to easily try Ring and potentially contribute to it.

Presenting Ring

Alexandre Viau and I have been thinking about presenting Ring at debconf 2016. We finally did it.

  [Video of the presentation]

Ritesh Raj Sarraf: Leprosy in India

11 July, 2016 - 01:06

During my recent travel, I had quite a long layover at the Doha International Airport in Qatar. While killing time, I watched an interesting programme on the Al Jazeera network.

The programme aired on Al Jazeera was Lifelines. The special episode I watched covered "Leprosy in India". After having watched the entire programme, I felt the urge to blog about it.

First of all, it was quite depressing to learn through the programme that the Govt. of India had officially marked Leprosy as eradicated from India in 2005. As per Wikipedia, "Globally in 2012, the number of chronic cases of leprosy was 189,000, down from some 5.2 million in the 1980s. The number of new cases was 230,000. Most new cases occur in 16 countries, with India accounting for more than half."

Of the data presented on Lifelines, they covered just a couple of districts from 2 states (of the 28+ states) of India. So, with many states remaining unsurveyed and uncounted, we are far away from being able to make such statements.

The Govt. has, on paper, marked Leprosy as eradicated, and so has the WHO. This means that there is no more funding to help people suffering from the disease. And with no Govt. (or international) funding, it is a rather disappointing situation.

I come from a small town on the Indo-Nepal international border, named Raxaul. And here, I grew up seeing many people who suffered from Leprosy. As a child, I never understood the disease much because, as is mentioned in the programme, I was told it was a communicable disease. Those suffering were (and are) tagged as Untouchables (Hindi: अछूत). Attached to my small town, there was and still is a sub-town named Sunderpur. This town houses patients suffering from Leprosy from multiple districts close by. I've never been there, but have been told that it is run by an NGO and helps patients fight the disease.

Lifelines also reported that fresh surveys done by the Lepra society, in just a 3-day campaign, found 3,808 new cases of people suffering from Leprosy. That is a big number, given that access to small towns only happens once a week, on market day. And in 3 days the team hardly covered a couple of district towns.

My plea to the media houses of India: please spend some time looking beyond the phony stuff that you mostly present. There are real issues that could be brought to a wider audience. As for the government, it is just depressing to know how rogue some/most of your statements are.

Joey Hess: twenty years of free software -- part 12 propellor

11 July, 2016 - 00:29

Propellor is my second big Haskell program. I recently described the motivation for it like this, in a proposal for a Linux.Conf.Au talk:

The configuration of Linux hosts has become increasingly declarative, managed by tools like puppet and ansible, or by the composition of containers. But if a server is a collection of declarative properties, how do you make sure that changes to that configuration make sense? You can test them, but eventually it's 3 AM and you have an emergency fix that needs to go live immediately.

Data types to the rescue! While data types are usually used to prevent, e.g., combining an Int and a Bool, they can be used at a much more abstract level, for example to prevent combining a property that needs a Debian system with a property that needs a Red Hat system.

Propellor leverages Haskell's type system to prove the consistency of the properties it will apply to a host.

The real origin story though, is that I wanted to finally start using configuration management, but the tools for it all seemed very complicated and built on shaky foundations (like piles of yaml), and it seemed it would be easier to write my own than deal with that. Meanwhile, I had Haskell burning a hole in my pocket, ready to be used in a second large project after git-annex.

Propellor has averaged around 2.5 contributions per month from users since it got started, with increasing numbers recently. That's despite having many fewer users than git-annex, which, remember, gets perhaps 1 patch per month.

Of course, I've "cheated" by making sure that propellor's users know Haskell, or are willing to learn some. And, propellor is very compositional; adding a new property to it is not likely to be complicated by any of the existing code. So it's easy to extend, if you're able to use it.

At this point propellor has a small community of regular contributors, and I spend some pleasant weekend afternoons reviewing and merging their work.

Much of my best work on propellor has involved keeping the behavior of the program the same while making its types better, to prevent mistakes. Propellor's core data types have evolved much more than in any program I worked on before. That's exciting!

Next: twenty years of free software -- part 13 past and future

Simon Désaulniers: [GSOC] Week 3&4 Report

11 July, 2016 - 00:12

I have finally made a version of the queries code that can viably be integrated into the master branch of OpenDHT. I am awaiting my mentor’s approval and/or comments.

What’s done

Queries. The library will provide the following additional functions in its API:

void get(InfoHash id, GetCallbackSimple cb, DoneCallback donecb={}, Value::Filter f = Value::AllFilter(), Where w = {});
void query(const InfoHash& hash, QueryCallback cb, DoneCallback done_cb = {}, Query q = {});

The Where structure in the first signature will allow the user to narrow the set of values received through the network to those that satisfy the “where” statement. The Where actually encapsulates a statement of the following SQL-ish form: SELECT * WHERE <some_field>=<some_field_value>.

Also, the DhtRunner::query function will let the user do something similar to what’s explained in the last paragraph, but where the returned data is encapsulated in a FieldValueIndex structure instead of a Value. This structure has a std::map<Value::Field, FieldValue>. You can think of the FieldValueIndex as a subset of fields of a given Value. The Query structure then allows you to create both Select and Where “statements”.

What’s next
  • Value pagination. I have begun working on this and I now have a clearer idea of the first step to accomplish. I have to redesign the ‘get’ operation’s callback-calling process by making a callback execution per SearchNode instead of per Search (a list of SearchNodes). This will let us properly write the value pagination code with node concurrency in mind, and will therefore make sure we don’t request a value from a node if it doesn’t store it;
  • Optimizing announce operations;

Bits from Debian: New Debian Developers and Maintainers (May and June 2016)

10 July, 2016 - 22:30

The following contributors got their Debian Developer accounts in the last two months:

  • Josué Ortega (josue)
  • Mathias Behrle (mbehrle)
  • Sascha Steinbiss (satta)
  • Lucas Kanashiro (kanashiro)
  • Vasudev Sathish Kamath (vasudev)
  • Dima Kogan (dkogan)
  • Rafael Laboissière (rafael)
  • David Kalnischkies (donkult)
  • Marcin Kulisz (kula)
  • David Steele (steele)
  • Herbert Parentes Fortes Neto (hpfn)
  • Ondřej Nový (onovy)
  • Donald Norwood (donald)
  • Neutron Soutmun (neutrons)
  • Steve Kemp (skx)

The following contributors were added as Debian Maintainers in the last two months:

  • Sean Whitton
  • Tiago Ilieve
  • Jean Baptiste Favre
  • Adrian Vondendriesch
  • Alkis Georgopoulos
  • Michael Hudson-Doyle
  • Roger Shimizu
  • SZ Lin
  • Leo Singer
  • Peter Colberg

Congratulations!

Paul Tagliamonte: SNIff

10 July, 2016 - 20:34

A while back, I found myself in need of two webservers that would terminate TLS (with different rules). I wanted to run some custom code I’d written (which uses TLS peer authentication), and also nginx on port 443.

The best way I figured out to do this was to write a tool to sit on port 443, parse TLS Client Hello packets, and dispatch to the correct backend depending on the SNI name.

SNI, or Server Name Indication, allows the client to announce (yes, over cleartext!) what server it’s looking for, similar to the HTTP Host header. Sometimes, as in the case above, the Host header won’t work, since you’ve already done a TLS handshake by the time you figure out who they’re looking for.

I also spun the Client Hello parser out into its own importable package, just in case someone else finds themselves in this same boat.

The code’s up on github.com/paultag/sniff!

Sune Vuorela: Let Qt models meet std::vector<std::tuple<…>>

10 July, 2016 - 20:27

The problem

So. I was stuck with a container of tuples that I wanted to see in a Qt view (QTableView, QtQuick ListView or similar). So how to do that?

Another problem: I haven’t been doing fun things with templates recently.

A solution?

After a bit of hacking, it seems like it can just be done like this:

typedef std::tuple<std::string, QString> Element;
typedef std::vector<Element> List;
List list = { std::make_tuple("first", "second"),
              std::make_tuple("third", "fourth") };
std::unique_ptr<TableModel> magic = createTableModel(list);
QTableView view;
view.setModel(magic->model());

and … tada:

Of course, we are also QtQuick friendly

std::unique_ptr<ListModel<List>> magic = createListModel(list);
// expose magic->model() to your quickview

and a delegate containing the following

Text {
    text: role0
}
Text {
    text: role1
}

 

can give:

But enough about creation.

Whattabout manipulation?

Luckily we got you covered. Insert two extra rows at position 1?

auto lines = { std::make_tuple("extra", "extra"),
               std::make_tuple("extra2","extra2") };
magic->insertRows(1,lines.begin(), lines.end());

Append a row?

magic->appendRow(std::make_tuple("",""));

Remove 2 rows at position 3?

magic->removeRow(3,2);

Replace the underlying list?

List newList;
// fill list
magic->reset(newList);

Read-only looping over the elements?

for(const Element& e : magic->list())
{
    ...
}

The Qt model of course also accepts setData calls.

Future?

If anyone is interested I will polish the code a bit and publish it. If that’s the case, how should I name this thing?

And I did get around to doing fun things with templates again.

Michal Čihař: Weblate 2.7

10 July, 2016 - 16:00

Slightly later than the monthly schedule, but Weblate 2.7 is out today. This release brings improvements to the API and is the first to officially support wlc, a command-line client for Weblate.

Full list of changes for 2.7:

  • Removed Google web translate machine translation.
  • Improved commit message when adding translation.
  • Fixed Google Translate API for Hebrew language.
  • Compatibility with Mercurial 3.8.
  • Added import_json management command.
  • Correct ordering of listed translations.
  • Show full suggestion text, not only a diff.
  • Extend API (detailed repository status, statistics, ...).
  • Testsuite no longer requires network access to test repositories.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used at https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Aptoide, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, so thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Craig Small: procps 3.3.12

10 July, 2016 - 12:58

The procps developers are happy to announce that version 3.3.12 of procps was released today. This version has a mixture of bug fixes and enhancements. This unfortunately means another API bump but we are hoping this will be fixed with the new library API coming soon.

procps is developed on gitlab and the new version of procps can be found at https://gitlab.com/procps-ng/procps/tree/newlib

procps 3.3.12 can be found at https://gitlab.com/procps-ng/procps/tags/v3.3.12

From the NEWS file, procps 3.3.12 has the following:

  • build: formerly optional --enable-oomem unconditional
  • free: man document rewritten for shared Debian #755233
  • free: interpret intervals in non-locale way Debian #692113
  • kill: report error if cannot kill process Debian #733172
  • library: refine calculation of ‘cached’ memory
  • library: find tty quicker Debian #770215
  • library: eliminate threads display inconsistencies Redhat #1284091
  • pidof: check cmd if space found in argv0
  • pmap: fixed detail parsing on long mapping lines
  • pmap: fix occasional incorrect memory usage values Redhat #1262864
  • ps: sort by cgroup Debian #692279
  • ps: display control group name with -o cgname
  • ps: fallback to attr/current for context Debian #786956
  • ps: enabled broken ‘thcount’ option Redhat #1174313
  • tests: conditionally add prctl Debian #816237
  • top: displays the 3 new linux-4.5 RES memory fields
  • top: man page memory fields corrected + new narrative
  • top: added display of CGNAME (control group name)
  • top: is now more responsive to cpus brought online
  • top: namespace cols use suppressible zero

We are hoping this will be the last release to use the old API, and that the new API (imaginatively called newlib) will be used in subsequent releases.

Feedback for this and any other version of procps can be sent to either the issue tracker or the development email list.

Norbert Preining: OpenPHT 1.6.2 for Debian/sid

10 July, 2016 - 11:17

I have updated the openpht repository with builds of OpenPHT 1.6.2 for Debian/sid for both the amd64 and i386 architectures. For those who have forgotten it, OpenPHT is the open source fork of Plex Home Theater that is used on RasPlex; see my last post concerning OpenPHT for details.

The repository also contains packages (source and amd64/i386) for shairplay, which is necessary for building and running OpenPHT.

sid and testing

For sid use the following lines:

deb http://www.preining.info/debian/ openpht-sid main
deb-src http://www.preining.info/debian/ openpht-sid main

You can also grab the binaries directly here for amd64 and i386, and you can get the source package with

dget http://www.preining.info/debian/pool/main/o/openpht/openpht_1.6.2.20160707-1.dsc

Note that if you only get the binary debs, you also need libshairplay0 from amd64 or i386.

The release file and changes file are signed with my official Debian key 0x860CDC13.

jessie

Builds for the Debian stable release jessie are available directly from the GitHub project page of OpenPHT.

Now get ready to enjoy the next movie!

Clint Adams: “Progress”

10 July, 2016 - 05:43

When you replace mutt-kz with mutt 1.6.1-2, you may notice a horribly ugly thing appear. Do not panic; just add unset sidebar_visible to your ~/.mutt/muttrc.

Matthew Garrett: "I recieved a free or discounted product in return for an honest review"

10 July, 2016 - 02:09

My experiences with Amazon reviewing have been somewhat unusual. A review of a smart switch I wrote received enough attention that the vendor pulled the product from Amazon. At the time of writing, I'm ranked as around the 2750th best reviewer on Amazon despite having a total of 18 reviews. But the world of Amazon reviews is even stranger than that, and the past couple of weeks have given me some insight into it.

Amazon's success is fairly phenomenal. It's estimated that there's over 50 million people in the US paying $100 a year to get free shipping on Amazon purchases, and combined with Amazon's surprisingly customer friendly service there's a lot of people with a very strong preference for choosing Amazon rather than any other retailer. If you're not on Amazon, you're hurting your sales.

And if you're an established brand, this works pretty well. Some people will search for your product directly and buy it, leaving reviews. Well reviewed products appear higher up in search results, so people searching for an item type rather than a brand will still see your product appear early in the search results, in turn driving sales. Some proportion of those customers will leave reviews, which helps keep your product high up in the results. As long as your products aren't utterly dreadful, you'll probably maintain that position.

But if you're a brand nobody's ever heard of, things are more difficult. People are unlikely to search for your product directly, so you're relying on turning up in the results for more generic terms. But if you're selling a more generic kind of item (say, a Bluetooth smart bulb) then there's probably a number of other brands nobody's ever heard of selling almost identical objects. If there's no reason for anybody to choose your product then you're probably not going to get any reviews and you're not going to move up the search rankings. Even if your product is better than the competition, a small number of sales means a tiny number of reviews. By the time that number's large enough to matter, you're probably onto a new product cycle.

In summary: if nobody's ever heard of you, you need reviews but you're probably not getting any.

The old way of doing this was to send review samples to journalists, but nobody's going to run a comprehensive review of 3000 different USB cables and even if they did almost nobody would read it before making a decision on Amazon. You need Amazon reviews, but you're not getting any. The obvious solution is to send review samples to people who will leave Amazon reviews. This is where things start getting more dubious.

Amazon run a program called Vine which is intended to solve this problem. Send samples to Amazon and they'll distribute them to a subset of trusted reviewers. These reviewers write a review as normal, and Amazon tag the review with a "Vine Voice" badge which indicates to readers that the reviewer received the product for free. But participation in Vine is apparently expensive, and so there's a proliferation of sites like Snagshout or AMZ Review Trader that use a different model. There's no requirement that you be an existing trusted reviewer and the product probably isn't free. You sign up, choose a product, receive a discount code and buy it from Amazon. You then have a couple of weeks to leave a review, and if you fail to do so you'll lose access to the service. This is completely acceptable under Amazon's rules, which state "If you receive a free or discounted product in exchange for your review, you must clearly and conspicuously disclose that fact". So far, so reasonable.

In reality it's worse than that, with several opportunities to game the system. AMZ Review Trader makes it clear to sellers that they can choose reviewers based on past reviews, giving customers an incentive to leave good reviews in order to keep receiving discounted products. Some customers take full advantage of this, leaving a giant number of 5 star reviews for products they clearly haven't tested and then (presumably) reselling them. What's surprising is that this kind of cynicism works both ways. Some sellers provide two listings for the same product, the second being significantly more expensive than the first. They then offer an attractive discount for the more expensive listing in return for a review, taking it down to approximately the same price as the original item. Once the reviews are in, they can remove the first listing and drop the price of the second to the original price point.

The end result is a bunch of reviews that are nominally honest but are tied to perverse incentives. In effect, the overall star rating tells you almost nothing - you still need to actually read the reviews to gain any insight into whether the customer actually used the product. And when you do write an honest review that the seller doesn't like, they may engage in heavy handed tactics in an attempt to make the review go away.

It's hard to avoid the conclusion that Amazon's review model is broken, but it's not obvious how to fix it. When search ranking is tied to reviews, companies have a strong incentive to do whatever it takes to obtain positive reviews. What we're left with for now is having to laboriously click through a number of products to see whether their rankings come from thoughtful and detailed reviews or are just a mass of 5 star one liners.



Creative Commons License: the copyright of each article belongs to its respective author.
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported license.