Planet Arch Linux

primus_vk>=1.3-1 update requires manual intervention

November 25, 2019 01:03 PM

The primus_vk package prior to version 1.3-1 was missing some soname links. This has been fixed in 1.3-1, so the upgrade will need to overwrite the untracked soname links. If you get errors like:

primus_vk: /usr/lib/ exists in filesystem
primus_vk: /usr/lib/ exists in filesystem

when updating, use:

pacman -Syu --overwrite=/usr/lib/,/usr/lib/

to perform the upgrade.

Giancarlo Razzolini@Official News

Arch Conf 2019 Report

November 17, 2019 02:00 PM

On the 5th and 6th of October, 21 team members attended the very first internal Arch Conf. We spent two days at Native Instruments in Berlin having workshops, discussions and hack sessions together. We even managed to get into, and escape, an escape room! It was a great and productive weekend which we hope to repeat in the coming years. Hopefully we will be able to expand on this in the future and include more community members and users.
Conference Posts

Reproducible Arch Linux Packages

November 11, 2019 11:00 AM

Arch Linux has been involved with the reproducible builds efforts since 2016. The goal is to achieve deterministic building of software packages to enhance the security of the distribution. After almost 3 years of continued effort, along with the release of pacman 5.2 and contributions from a lot of people, we are finally able to reproduce packages distributed by Arch Linux! This enables users to build packages and compare them with the ones distributed by the Arch Linux team.
Morten Linderud

New kernel packages and mkinitcpio hooks

November 10, 2019 09:41 PM

All our official kernels (linux, linux-lts, linux-zen and linux-hardened) no longer install the actual kernel to /boot.

The installation, as well as removal, is handled by mkinitcpio hooks and scripts. There is no need for any manual intervention.

The intention is to make the kernel packages more self-contained, as well as making the boot process more flexible, while also keeping it backwards compatible.

As of now, only mkinitcpio has hooks for handling kernel installations and removals. We do not ship any for dracut yet, but it will have similar hooks in the near future.
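For the curious, pacman hooks of this kind follow the alpm-hooks(5) format; below is a minimal sketch of what such a kernel-install hook could look like. The trigger target and the Exec script name are illustrative assumptions, not the exact files mkinitcpio ships:

```ini
# Sketch of an alpm hook in the style used for kernel installation.
# Fires when a kernel image appears or changes under /usr/lib/modules.
[Trigger]
Operation = Install
Operation = Upgrade
Type = Path
Target = usr/lib/modules/*/vmlinuz

[Action]
Description = Updating initramfs and copying the kernel to /boot...
When = PostTransaction
Exec = /usr/share/libalpm/scripts/example-kernel-install
NeedsTargets
```

The actual hooks installed on your system can be inspected under /usr/share/libalpm/hooks/.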

Giancarlo Razzolini@Official News

Clarification regarding recent email activity on the arch-announce list

October 25, 2019 08:27 PM

Today, one email was sent to the arch-announce mailing list that was able to circumvent the whitelisting checks that are done by the mailman software. This was not due to unauthorized access and no Arch Linux servers were compromised.

We have implemented measures to make sure this does not happen again, by using mailman's poster password feature. We are also making sure these simple whitelist checks are not used anywhere else.

Edited to add: There was a second email that was also sent today, in order to make sure the poster password feature was working. That email did not circumvent any check and was intentionally sent.

Giancarlo Razzolini@Official News

Pacman 5.2 Release

October 21, 2019 11:48 AM

Nothing like a new pacman release to make me locate the password to this site…

Tradition dictates I thank people who have contributed to the release (as well as genuinely meaning the thanks!). We had 29 people have a patch committed this release, with a few new names. Here are the top ten:

$ git shortlog -n -s v5.1.0..v5.2.0 | head -n10
   108  Eli Schwartz
    38  Allan McRae
    30  morganamilo
    24  Andrew Gregory
    20  Dave Reisner
     9  Jan Steffens
     6  Michael Straube
     4  Jonas Witschel
     4  Luke Shumaker
     3  Que Quotion

We have a clear winner. Although I’m sure that at least half of those are in response to bugs he created! He claims it is a much smaller proportion… And a new contributor in third.

What has changed in this release? Nothing super exciting as far as I’m concerned, but check out the detailed list here.

We have completely removed support for delta packages. This was a massively underused feature that usually made updates slower for a slight saving in bandwidth, and it had a massive security hole: a malicious package database in combination with delta packages could run arbitrary commands on your system. This would be less of an issue if a certain Linux distro signed their package databases… Anyway, on balance I judged it better to remove this feature altogether. We may come back to this in the future with a different implementation, but I would not expect that any time soon. Note a similar vulnerability was found with using XferCommand to download packages, but we plugged that hole instead of removing the feature!

Support for downloading PGP keys using the new Web Key Directory (WKD) was added to pacman. Both pacman-key and makepkg will also look there by default with the latest GnuPG release. This prevents DoS attacks through people adding very large numbers of signatures to PGP keys. The attack scope was limited for Arch Linux anyway, as most people obtain the pacman keyring through the archlinux-keyring package.

The much maligned --force made its way to /dev/null. The --overwrite option has been a replacement for over a year and is a precision surgical instrument compared to the blunt hammer of --force.

There is a small user interface change for searching files databases with -F. Specifying the -s option was redundant, so it was removed. More information, such as package group and installed status, is shown in the search results, bringing the output in line with -Ss.

The split of makepkg into smaller and extendable components continued. You can now provide new source download and signature verification routines (e.g. if you are living in the past and want to support cvs:// style URLs). We also added support for lzip, lz4 and zstd compressed packages. Arch Linux will switch to zstd by default in the near future.
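Opting in to the new compression on your own machine is a makepkg.conf change; a small sketch (the COMPRESSZST flags shown are an assumption of the defaults — check makepkg.conf(5) for your version):

```shell
# makepkg.conf fragment: produce zstd-compressed packages.
# (COMPRESSZST flags are illustrative; see makepkg.conf(5).)
PKGEXT='.pkg.tar.zst'
COMPRESSZST=(zstd -c -z -q -)
```

With this in place, makepkg emits *.pkg.tar.zst files instead of *.pkg.tar.xz.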

Under the hood, we are in the process of changing our build system from autotools to meson. This is relatively complete, but there still was a decent churn of patches to meson files as we approached release. You can build pacman from the release tarball using meson if you want to test. Next release is likely to be meson only. (Edit: you can’t test meson with the 5.2.0 tarball as it is missing a couple of the meson build files.)

Expect the release to land in Arch Linux “soon”. Expect to see another blog post in a year or so when I make the next release…

Allan@Allan McRae

Required update to recent libarchive

October 16, 2019 12:43 PM

The compression algorithm zstd brings faster compression and decompression, while maintaining a compression ratio comparable with xz. This will speed up package installation with pacman, without further drawbacks.

The imminent release of pacman 5.2 brings build tools with support for compressing packages with zstd. To install these packages you need libarchive with support for zstd, which entered the repositories in September 2018. In order for zstd compressed packages to be distributed, we require all users to have updated to at least libarchive 3.3.3-1. You have had a year, so we expect you already did update. Hurry up if you have not.

If you use custom scripts make sure these do not rely on hardcoded file extensions. The zstd package file extension will be .pkg.tar.zst.
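A script that walks a package cache, for example, can glob over the compression suffix instead of assuming .xz; a small sketch (the function name is mine):

```shell
# List package files regardless of compression extension:
# the glob matches .pkg.tar.xz and .pkg.tar.zst alike.
list_packages() {
    for pkg in "$1"/*.pkg.tar.*; do
        [ -e "$pkg" ] && printf '%s\n' "$pkg"
    done
}
```

A hardcoded *.pkg.tar.xz glob would silently miss every zstd package after the switch.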

Christian Hesse@Official News

`base` group replaced by mandatory `base` package - manual intervention required

October 06, 2019 10:09 AM

The base group has been replaced by a metapackage of the same name. We advise users to install this package (pacman -Syu base), as it is effectively mandatory from now on.

Users requesting support are expected to be running a system with the base package.

Be aware that base as it stands does not currently contain:
- A kernel
- An editor
... and other software that you might expect. You will have to install these separately on new installations.

Robin Broda@Official News

astyle>=3.1-2 update requires manual intervention

August 26, 2019 06:39 AM

The astyle package prior to version 3.1-2 was missing a soname link. This has been fixed in 3.1-2, so the upgrade will need to overwrite the untracked soname link created by ldconfig. If you get an error

astyle: /usr/lib/ exists in filesystem

when updating, use

pacman -Suy --overwrite usr/lib/

to perform the upgrade.

Antonio Rojas@Official News

tensorflow>=1.14.0-5 update requires manual intervention

August 20, 2019 10:22 PM

The tensorflow packages prior to version 1.14.0-5 were missing some soname links. This has been fixed in 1.14.0-5, so the upgrade will need to overwrite the untracked soname links created by ldconfig. If you get errors like

tensorflow: /usr/lib/ exists in filesystem
tensorflow: /usr/lib/ exists in filesystem
tensorflow: /usr/lib/ exists in filesystem

when updating, use

pacman -Suy --overwrite=usr/lib/,usr/lib/,usr/lib/

to perform the upgrade.

Konstantin Gizdov@Official News

E-ink home display

July 22, 2019 07:37 PM

I've always wanted an e-ink status display in my living room to view the weather forecast, news and public transport information. Previously I used a SHA2017 Badge with the following app, which showed a weather forecast for the following four days. So I've decided to scale up to a nice 7.5" e-ink screen which I can hang on the wall. To control the e-ink screen I've taken a Raspberry Pi Zero W, since it's easier to develop with than an ESP32. To hold the e-ink screen I've gotten an IKEA RIBBA frame, which perfectly fits the e-ink screen and leaves enough space to fit an e-ink SPI controller and a Raspberry Pi.

e-ink back panel

When I started playing around with drawing images on the e-ink screen with the official Waveshare Python driver, I noticed that a blank plus an image update took around 50 seconds at 100% CPU. This is too slow for a status display, so I started profiling with a simple test program. The Python profiler concluded that most of the time was spent in writebytes, a function of the Python SPIDev module. The driver did a write call to the SPI device for every pixel individually, which was the first issue to tackle. A newer version of this driver included the 'writebytes2' function, which can write a whole Python iterable at once; switching to it led to a significant improvement in this commit.

Waveshare also sells e-ink panels with a third color, which led to unnecessary looping since my panel is black and white. The example code first clears the panel, then generates a buffer and writes it to the device; simply generating the buffer up front saved a small amount of "panel updating" time. The code to generate the buffer was also optimized.

After all these changes the panel updates in ~10 seconds on a Raspberry Pi Zero W, and a tiny bit faster, ~8 seconds, on a Raspberry Pi 3. The driver code can be viewed here. All that was left was to write my own status page for my living room. The e-ink panel fetches my local weather, public transport and Dutch news; the code which drives the display can be read here. The final display can be viewed below; the frame hangs on a nail, with a barrel jack connector for a 5V power supply.

e-ink wall mount

In the future I would like to include a graph of the predicted rain for the following hour since cycling in the rain isn't always fun :-)

Jelle van der Waa@Jelle Van der Waa

libbloom>=1.6-2 update requires manual intervention

July 11, 2019 01:07 PM

The libbloom package prior to version 1.6-2 was missing a soname link. This has been fixed in 1.6-2, so the upgrade will need to overwrite the untracked soname link created by ldconfig. If you get an error

libbloom: /usr/lib/ exists in filesystem

when updating, use

pacman -Suy --overwrite usr/lib/

to perform the upgrade.

Felix Yan@Official News

Reproducing Arch [core] repository packages

June 27, 2019 04:37 PM

Arch Linux has been working on reproducible builds for a while and has a continuous test framework rebuilding packages as they are updated in our repositories. This test does an asp checkout of a package and builds it twice in a schroot; we do not try to reproduce actual repository packages yet. In the end this is however what we want to achieve: giving users the ability to verify a repository package by rebuilding it on their own hardware.

repro was created to achieve this goal: it creates a build chroot with the packages that were installed during the original build (from the .BUILDINFO file), sets SOURCE_DATE_EPOCH accordingly, fetches the correct PKGBUILD and then builds the package. This tool does not run in a CI environment yet, so a bash script was hacked together to build all 232 of our [core] packages one by one, leading to 0% reproducibility with the following issues:

  • makepkg options differed; these options are recorded in .BUILDINFO but not yet set by repro.
  • Packages were not reproducible (108 due to makepkg recording false sizes in .PKGINFO).
  • PKGBUILD fetching logic failed (21 packages).
  • Failed to download source files due to DNS issues (popt, libpipeline, acl, mlocate).
  • Packages did not build due to OOM and other issues (lib32-gcc-libs, gcc-obj, gcc-libs, gcc-go, gcc-fortran, gcc, fakeroot).
  • asp failed to get a package for unknown reasons (libusb).
  • Packages not reproducible (s-nail, amd-ucode, syslinux, texinfo, tzdata, patch, .. and more).
  • libpcap GPG verification failed.
  • Builds with different packages installed, leading to a different .BUILDINFO, due to an issue in repro (unknown).
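The pass/fail check behind these numbers ultimately boils down to rebuilding a package and comparing it bit for bit with the reference; a minimal sketch of that comparison (the function is mine — repro itself does much more, such as producing diffable output):

```shell
# A rebuilt package counts as reproducible if it is bit-for-bit
# identical to the reference package.
is_reproducible() {
    [ "$(sha256sum < "$1")" = "$(sha256sum < "$2")" ]
}
```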

Logs of the process can be found here.

This shows that a lot still has to be done for reproducible Arch Linux. In the next pacman release the size issue should be resolved, which will lead to at least some reproducible packages! repro has to be improved and non-reproducible packages sorted out. In a few months I intend to retry reproducing [core] packages and have at least > 0% reproducibility!

Jelle van der Waa ( Van der Waa

mariadb 10.4.x update requires manual intervention

June 27, 2019 01:40 PM

The update to mariadb 10.4.6-1 and later changes configuration layout as recommended by upstream.

The main configuration file moved from /etc/mysql/my.cnf (and its include directory /etc/mysql/my.cnf.d/) to /etc/my.cnf (and /etc/my.cnf.d/). Make sure to move your configuration.

Instantiated services (like mariadb@foo.service) are no longer configured in separate files (like /etc/mysql/myfoo.cnf). Instead, move your configuration into configuration blocks with a group suffix in the main configuration file, one for each service. A block should look something like this:

datadir = /var/lib/mysql-foo
socket = /run/mysqld/mysqld-foo.sock

Like every mariadb feature update this requires the data directory to be updated. With the new configuration in place run:

systemctl restart mariadb.service && mariadb-upgrade -u root -p
Christian Hesse@Official News

Mini DebConf Hamburg 2019

June 20, 2019 11:37 AM

The reproducible builds project was invited to join the mini DebConf Hamburg sprints and conference part. I attended with the intention of working together on Arch Linux reproducible test setup improvements, reproducing more packages and comparing results.

The first improvement was adding JSON status output for Arch Linux, and coincidentally also for openSUSE and, in the future, Alpine; the commit can be viewed here. The result was deployed and the Arch Linux JSON results are live.

The next day, I investigated why Arch Linux's kernel is not reproducible. The packaging requires a few changes for partial reproducibility:

export KBUILD_BUILD_HOST="arch"

One of the remaining issues is CONFIG_MODULE_SIG_ALL, which signs all kernel modules so that only signed kernel modules can be loaded. If no private key is specified, a key is generated at build time, which is always non-reproducible. A solution for this problem hasn't been found, as providing a key in the repository might also be non-optimal. Apart from this issue, the vmlinuz-linux image is also non-reproducible, which needs to be investigated further.
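For reference, the kernel configuration options involved look roughly like this (a sketch: the key path shown is the kernel build's documented default, and leaving it effectively unset is what triggers per-build key generation):

```ini
# Module signing options involved in the non-reproducibility:
CONFIG_MODULE_SIG=y
CONFIG_MODULE_SIG_ALL=y
# With the default below and no key present on disk, the build generates
# a fresh signing key every time, so no two builds are identical.
CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"
```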

Further packages which currently do not reproduce in our test framework were investigated.

  • s-nail, due to recording of MAKEFLAGS, which is under investigation for fixing.

  • keyutils was fixed for embedding the build date in its binary with this patch.

  • nspr has been made reproducible in Arch Linux with the following change.

Plans were made to extend the reproducible builds test framework for Arch Linux and start reproducing real repository packages on the test framework. Pacman was also packaged for Debian inclusion, so that it's easier to bootstrap Arch containers/chroots from a Debian install.

A big thanks to all the organizers of mini DebConf Hamburg for organizing the event!

Jelle van der Waa@Jelle Van der Waa

Arch Conf in October

May 26, 2019 02:19 PM

Fellow Archers! We are happy to announce that we will be hosting a community-centric (developers, trusted users and support staff) event this October (the weekend of the 5th and 6th, to be exact). For more information, see the about page. Arch Conf will happen at Native Space (the community space of Native Instruments GmbH). For details on the exact location and how to get there, please check out the travel page.
Conference Posts

External encrypted disk on LibreELEC

May 05, 2019 12:00 AM

Last year I replaced the Arch Linux ARM setup on the Raspberry Pi, which had just Kodi installed, with LibreELEC.

Today I plugged in an external disk encrypted with dm-crypt, but to my full surprise this isn’t supported.

Luckily the project is open source and sky42 already provides a LibreELEC version with dm-crypt built-in support.

Once I flashed sky42’s version, I set up an automated mount at startup, and the corresponding umount, via these scripts:

# copy your keyfile into /storage via SSH
$ cat /storage/.config/
cryptsetup luksOpen /dev/sda1 disk1 --key-file /storage/keyfile
mount /dev/mapper/disk1 /media

$ cat /storage/.config/
umount /media
cryptsetup luksClose disk1

Reboot it and voilà!


If you want to automatically mount the disk whenever you plug it, then create the following udev rule:

# Find out ID_VENDOR_ID and ID_MODEL_ID for your drive by using `udevadm info`
$ cat /storage/.config/udev.rules.d/99-automount.rules
ACTION=="add", SUBSYSTEM=="usb", SUBSYSTEM=="block", ENV{ID_VENDOR_ID}=="0000", ENV{ID_MODEL_ID}=="9999", RUN+="cryptsetup luksOpen $env{DEVNAME} disk1 --key-file /storage/keyfile", RUN+="mount /dev/mapper/disk1 /media"
Andrea Scarpino

Automated phone backup with Syncthing

May 04, 2019 12:00 AM

How do you backup your phones? Do you?

I used to perform a copy of all the photos and videos from my and my wife’s phones to my PC monthly, and then I copied them to an external HDD attached to a Raspberry Pi.

However, it’s a tedious job, mainly because:

- I cannot really use the phones during this process;
- MTP works one in three times; often I have to fall back to ADB;
- I have to unmount the SD cards to speed up the copy;
- after I copy the files, I have to rsync everything to the external HDD.

The Syncthing way

Syncthing describes itself as:

Syncthing replaces proprietary sync and cloud services with something open, trustworthy and decentralized.

I installed it on our Android phones and on the Raspberry Pi. On the Raspberry Pi I also enabled remote access.

I started the Syncthing application on the Android phones and chose the folders (you can also select the whole internal memory) to back up. Then I shared them with the Raspberry Pi only, and I set the folder type to “Send Only” because I don’t want the Android phones to retrieve any files from the Raspberry Pi.

On the Raspberry Pi, I accepted the sharing request from the Android phones, but I also changed the folder type to “Receive Only” because I don’t want the Raspberry Pi to send any file to the Android phones.

All done? Not yet.

Syncthing’s main purpose is to sync, not to backup. This means that, by default, if I delete a photo from my phone, that photo is gone from the Raspberry Pi too, and this isn’t what I need nor what I want.

However, Syncthing supports File Versioning and, better yet, it supports a “trash can”-like file versioning which moves your deleted files into a .stversions subfolder; if even this isn’t enough, you can also write your own file versioning script.
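Such a custom versioning script essentially just moves a file aside instead of deleting it; a minimal sketch (the argument convention here is my assumption, not Syncthing's actual calling interface — check the file versioning docs before using):

```shell
# "Trash can" style versioning step: move a deleted file into a backup
# tree, preserving its relative path, instead of removing it.
version_file() {
    folder=$1 relpath=$2 backup=$3
    mkdir -p "$backup/$(dirname "$relpath")"
    mv "$folder/$relpath" "$backup/$relpath"
}
```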

All done? Yes! Whenever I connect to my own WiFi, my photos are backed up!

Andrea Scarpino

Arch signoff

April 02, 2019 07:37 PM

Arch sign off tool

For some time now, Arch has been letting users become testers who can sign off packages in the [testing] repository. The idea behind allowing users, and not only the Arch team, to sign off packages as known good is that packages can be moved earlier, and bugs and issues found earlier. To sign off a package you need to log in to Arch Linux's website and go to the sign off page. Haavard created a tool to sign off packages from the command line, which makes signing off easier by doing it interactively.

This tool has now been adopted by Arch as the official sign off tool and has been packaged in the extra repository. Issues can be reported here.

If you want to become an Arch Linux tester, feel free to apply here. A special thanks goes out to the current testing team and haavard for creating this awesome tool!

Jelle van der Waa@Jelle Van der Waa

My new hobby

March 06, 2019 09:22 PM

A few years ago, sitting in an emergency room, I realized I'm not getting any younger and if I want to enjoy some highly physical outdoor activities for grownups these are the very best years I have left to go and do them. Instead of aggravating my RSI with further repetitive motions on the weekends (i.e. trying to learn how to suck less at programming) I mostly wrench on an old BMW coupe and drive it to the mountains (documenting that journey, and the discovery of German engineering failures, was best left to social media and enthusiast forums).

Around the same time I switched jobs, and the most interesting stuff I encounter that I could write about I can't really write about, because it would disclose too much about our infrastructure. If you are interested in HAProxy for the enterprise you can follow development on the official blog.

anrxc@Adrian Caval

Bug Day 2019

January 01, 2019 05:20 PM

Hey all.

We will be holding a bug day on the weekend of January 5th and 6th, to start off the year with a cleaned up bugtracker.

The community is encouraged to canvass the bugtracker and find old bugs, figure out which ones are still valid, track down fixes and suchlike.

Feel free to join #archlinux-bugs at that time in order to reach a bug wrangler and get more input on a bug. Or just post to the bug tracker.

Links: … 29410.html
Open bugs, sorted by last edit date: core/extra and community

eschwartz@Forum Announcements

Arch Linux @ Reproducible Build Summit Paris

December 13, 2018 10:37 PM

Write up of the reproducible summit

Three members of the Arch Linux team attended the Reproducible Builds Summit 2018 in Paris this week to work with the reproducible ecosystem on reproducible build issues. The other participants were from a lot of different projects and companies, such as Debian, NixOS, Guix, Alpine, openSUSE, OpenWrt, Google, Microsoft and many more. The summit was organized by letting attendees work in small groups on issues they are interested in, trying to find solutions and discussing ideas. At the end of the day there was time for hacking together on solutions. The event was very open and there was a lot of collaboration between projects which have different goals!

The Arch Team has worked on the following topics:

  • Packaging & updating more reproducible build tools in our repos: disorderfs was updated to the latest version, and diffoscope was updated after a pytest fix from Chris Lamb. Reprotest, the tool to test if something is reproducible, has been added to [community].
  • A note has been made that we should investigate if the Arch ISO is reproducible. At least one possible issue is that squashfs images are not reproducible and Arch should consider switching to squashfskit which creates reproducible squashfs images.
  • Discussed adding a JSON endpoint for fetching the reproducible build status of Arch Linux packages.
  • Sharing reproducible build issues cross distros.
  • Discussed how to rebuild Arch Linux packages and test if they are reproducible.
  • Discussed how to verify before installing a package if a package is reproducible.
  • Debian's kernel is reproducible, but Arch's isn't. We started investigating why ours isn't, as one goal is to make [core] the first reproducible repo.
  • Investigated PGO (profile guided optimisation) reproducibility issues for Firefox and Python.

And much more! It has left us with a lot of "homework" to continue making Arch Linux more reproducible!

A huge thanks to the organizers and sponsors of the Reproducible build summit!

Jelle van der Waa@Jelle Van der Waa

Arch Linux ARM on the Allwinner NanoPi A64

December 09, 2018 12:37 PM

Arch Linux ARM on a NanoPi A64

I've obtained two NanoPi A64s a long while ago, and recently thought of setting them up as a HA cluster as an exercise, since setting things up with real hardware is a lot more fun than with VMs or containers. I also wanted to try out aarch64 and see how well it fares on mainline Linux.

The first part of the setup was creating the partitions and rootfs on the SD card; for this I just followed the "Generic AArch64 Installation". The more challenging part was setting up U-Boot: clone it and follow the 64-bit board instructions. All that is required now is to install a boot.scr file in /boot on the SD card: download the boot.cmd file and create a boot.scr with mkimage from uboot-tools, using mkimage -C none -A arm64 -T script -d boot.cmd boot.scr.

That should get the NanoPi A64 booting. Note that Linux 4.20 is required for the ethernet controller to work; luckily Arch Linux ARM offers a linux-rc package, since as of writing this article 4.20 has not been released yet.

Jelle van der Waa@Jelle Van der Waa

archlinux-keyring update required before December 1 2018

October 18, 2018 06:39 PM

archlinux-keyring 20181018-1 re-enables my PGP key for packaging. As any package updates on my behalf require this version (or greater) to proceed without errors, users should update archlinux-keyring before December 1, 2018.

Prior to this date, there will be no new packages signed by my key. The list of affected packages: … ainer=Alad


Alad@Forum Announcements

packer renamed to packer-aur

August 14, 2018 04:31 PM

The famous AUR helper `packer` has been renamed to `packer-aur`, in favor of the HashiCorp image builder `packer` (community/packer).

Shibumi@Forum Announcements

Arch User Magazine

August 03, 2018 11:37 AM

A blast from the past, the Arch User Magazine

It's almost 10 years ago that Ghost1227 created the Arch User Magazine, and this week I was reminded of its existence. I found that the original domain where the magazine was hosted is no longer owned by Ghost1227, but by using the Wayback Machine I was able to retrieve two of the three editions of the magazine.

The original forum thread about the first magazine can be found here, as well as the first and second magazine. There should be a third edition, but I couldn't find it via the Wayback Machine.

Enjoy reading this part of Arch history and I hope someone recreates the user magazine!

Jelle van der Waa@Jelle Van der Waa

Arch monthly July

August 01, 2018 03:00 PM

Archweb updates

The Arch Linux website has been updated and its search functionality was expanded, making it able to find 'archlinux-keyring' when searching for 'archlinux keyring'. This was contributed by an external contributor! Another small visual improvement was made by removing some empty spaces in provides.


AURpublish

AURpublish, a tool to manage your AUR packages, was added to [community] by eschwartz.

Dropping luxrender packages

Lukas proposed dropping the luxrays, luxrender and luxblend25 packages from [community]. The proposal went through without opposition, and embree2 was also dropped in the process.

Python 3.7 in [testing]

Python 3.7 finally landed in [testing] after a painful rebuild period, with many packages requiring fixes due to the async keyword or C ABI/compiler changes.

Enforcing 2FA on Github

This does not impact Arch directly, but the GitHub repos used for the development of the Arch security tracker, the website and some mirroring now enforce 2FA, in light of the recent Gentoo GitHub repo incident.

Removal of openjdk 9, phasing out Java 7

Anthraxx removed openjdk 9 from the repos since it is EOL and nothing depends on it. Java 7 will be phased out soon as well.

New TU

Filipe Laíns has been accepted as a new TU, read his proposal and results here.

New TU applicant

A new application has arrived, voting is currently underway.

Acroread package compromised

The acroread package was compromised by a user who took over the orphan package and uploaded a new version with malicious code.

aurweb 4.7.0

A new version of the AUR is deployed with new features and bugfixes.

Linux package source moved

The Linux package source moved to GitHub, along with changes in the PKGBUILD.

Pacman 5.1.1 release

Pacman 5.1.1 was released containing several bugfixes.

Jelle van der Waa@Jelle Van der Waa

libutf8proc>=2.1.1-3 update requires manual intervention

July 14, 2018 04:55 PM

The libutf8proc package prior to version 2.1.1-3 had an incorrect soname link. This has been fixed in 2.1.1-3, so the upgrade will need to overwrite the untracked soname link created by ldconfig. If you get an error

libutf8proc: /usr/lib/ exists in filesystem

when updating, use

pacman -Suy --overwrite usr/lib/

to perform the upgrade.

Antonio Rojas@Official News

Arch Linux at FrOSCon

July 10, 2018 07:37 PM

Yet another shoutout for FrOSCon, which will be held on the 25th and 26th of August. Arch Linux will have a devroom with talks, so far about Linux Pro Audio and our general infrastructure / reproducible builds.

Thanks to Stickermule there will be Arch Linux stickers to hand out.

Jelle van der Waa@Jelle Van der Waa

Arch monthly June

July 02, 2018 07:37 PM

Archive cleanup

The Arch archive has been cleaned up; the discussion started in this mail thread. The archive server was running out of space and therefore needed some cleaning: all packages which are not required for reproducible builds were removed (these were from 2013/2014/2015). Packages from these years should also be available at the Internet Archive.


FrOSCon

There will be an Arch Linux devroom on the Sunday of FrOSCon, with talks and the possibility to meet members of the team.

Python2 modules cleanup

A proposal has been sent out to remove 'orphan' python2 modules, as a start to phasing out python2 packages.

Package guidelines improvements

Foxboron proposed improving the package guidelines.

Core/extra cleanup

Core and extra have been cleaned up a bit; the removed packages were pcmciautils, speedtouch and zd1211-firmware.

AUR package compromised

As expected on the AUR, anyone can upload a package or adopt one and change it. This happened to acroread and some other packages on Sunday; always review packages from the AUR before building them.

Jelle van der Waa@Jelle Van der Waa

Arch monthly May

June 02, 2018 07:37 PM

Pacman release

Finally! A new pacman release. This version adds some critical bits for reproducible builds, and the pacman repository has been shed of misc tools, which are now in pacman-contrib. More details in the changelog and on reddit.


For reproducible builds, every package in the repository built on a user's system should create exactly the same package as the repository package. To achieve this, the packages which were installed in the build chroot are recorded in a BUILDINFO file (man BUILDINFO) which is added to the .pkg.tar.xz package. BUILDINFO files were added a while ago in pacman, but not every package contains them yet! Interestingly enough, even a rolling release distro contains packages from 2013; these are now being rebuilt! This also ties in to the archive cleanup, since the archive server is almost full and the 2013/2014/2015 directories will be removed. If you have a good network connection and want to mirror the archive, reach out!
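You can check whether a given package file already carries this metadata by reading the archive member directly; a small sketch (the member name follows the BUILDINFO man page, the helper function is mine):

```shell
# Print a package's embedded build metadata, if present.
# Works for any compression that tar/libarchive understands.
show_buildinfo() {
    tar -xOf "$1" .BUILDINFO
}
```

A package without the member simply produces an error, revealing it predates BUILDINFO support.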

pkgconf replaces pkg-config

As can be read on the mailing list, pkgconf has now replaced pkg-config.

GCC 8 in [core]

The latest version of GCC 8 lands in [core]. It enables more warnings by default, so older packages might fail to build if they enable -Werror.

Jelle van der Waa

Pacman-5.1 – Don’t Use the Force, Luke!

May 28, 2018 11:18 PM

Wow… look at all the cobwebs around here! No posts in two years. But the need for a pacman release post has dragged me back. I clearly still remembered the password, so that is a bonus!

As is tradition, before I get in to details, I need to thank everyone for their help in making this release. Here are the top 10 committers:

$ git shortlog -n -s v5.0.0..v5.1.0
    82  Allan McRae
    60  Andrew Gregory
    45  Eli Schwartz
    16  Ivy Foster
    10  Dave Reisner
     9  Christian Hesse
     9  Gordian Edenhofer
     8  Alastair Hughes
     7  Rikard Falkeborn
     6  Michael Straube

(I win!) Lots of new names there which is always really appreciated. And as usual a long tail of contributors submitting the occasional patch – there were 48 contributors in total.

Onto what has changed in this release. There is a lack of what I would call a killer feature in this release. Mostly a lot of small changes that improve usability, which is why there was so much time between releases. Here is a detailed list of changes. However, there are a few things worth highlighting.

There is a new option --overwrite, which is a replacement for the too often misused --force (hence the release name). This allows fine grained control over which file conflicts pacman is safe to ignore. Handling the latest upgrade requiring user intervention in Arch Linux would now look like:

pacman -Syu --overwrite usr/lib/

You can even use globs when specifying the files to overwrite. Not only is specifying exact files to overwrite a lot safer than the old --force, there are also some common sense restrictions there too (you can't overwrite a directory with a file, or force package installs with conflicting files).

We have also added a --sysroot option that will replace --root. Basically, this now works the way people will expect – for example, the configuration file used is the one in the specified root, and not the local one. This does require a bit more setup while creating a new install root, but hopefully will be a lot more robust.

We have also added support for reproducible builds. This was mostly ensuring all files had the same timestamp and obeyed the SOURCE_DATE_EPOCH standard. We also added a .BUILDINFO file within each package, recording information about the environment a package was built in. This allows scripts to regenerate the build environment to demonstrate a package is reproducible (particularly important in rolling release distros).

There was also improved support for debugging packages. Split packages now produce a single debug package instead of one for each split package. This makes it easier to get all required debug symbols for a particular package (and hopefully easier for distros to carry these packages…). Also, we include relevant source files in the debug packages, allowing us to step through the code.

Finally, I killed off the “contrib” directory as it was taking excessive amounts of pacman developer time. That means no more checkupdates, paccache, … However, this has been picked up as a separate project, which is available by installing pacman-contrib in Arch Linux.

As always, this is a bug free release. But if you spot something you think is a bug, please file a bug report and we can assign blame – which is more important than fixing! (The pool for which developer creates the first pacman bug of this release is still open at the time of posting.)

Allan@Allan McRae

IWD: the new WPA-Supplicant Replacement

May 13, 2018 08:08 PM

I just want to inform you all that I have pushed IWD version 0.3 into community.
IWD is a new wireless daemon and aims to replace wpa_supplicant in the future.
I have created a first wikipage for the package as well here:

IWD comes with a more secure approach. It doesn't use OpenSSL or GnuTLS; instead, it uses kernel functions for cryptographic operations.

If you want to know more you can check out this video here:

Shibumi@Forum Announcements

js52 52.7.3-2 upgrade requires intervention

May 04, 2018 08:27 PM

Due to the SONAME of /usr/lib/ not matching its file name, ldconfig created an untracked file /usr/lib/. This is now fixed and both files are present in the package.

To pass the upgrade, remove /usr/lib/ prior to upgrading.

Jan Alexander Steffens@Official News

glibc 2.27-2 and pam 1.3.0-2 may require manual intervention

April 20, 2018 07:45 AM

The new version of glibc removes support for NIS and NIS+. The default /etc/nsswitch.conf file provided by the filesystem package already reflects this change. Please make sure to merge the pacnew file, if it exists, prior to the upgrade.

NIS functionality can still be enabled by installing the libnss_nis package. There is no replacement for NIS+ in the official repositories.

pam 1.3.0-2 no longer ships the pam_unix2 module and the pam_unix_*.so compatibility symlinks. Before upgrading, review the PAM configuration files in the /etc/pam.d directory and replace the removed modules. Users of pam_unix2 should also reset their passwords after such a change. The defaults provided by the pambase package do not need any modifications.

Bartłomiej Piotrowski@Official News


February 23, 2018 08:34 PM

Solving Battleships with SAT
Kyle Keen

zita-resampler 1.6.0-1 -> 2 update requires manual intervention

February 22, 2018 07:57 AM

The zita-resampler 1.6.0-1 package was missing a library symlink that has been readded in 1.6.0-2. If you installed 1.6.0-1, ldconfig would have created this symlink at install time, and it will conflict with the one included in 1.6.0-2. In that case, remove /usr/lib/ manually before updating.

Antonio Rojas@Official News

Arch monthly January

February 06, 2018 12:37 PM

Arch Linux @ FOSDEM

Arch Linux Trusted Users, Developers and members of the Security team have been at FOSDEM. Hopefully next year there will be more stickers and maybe a talk, but it was great to meet some Arch users in real life, discuss and even hack on the Security Tracker.

TU Application: Ivy Foster

A new TU applied, you can read the sponsorship here.

New DevOps member Phillip Smith

A new member joined the sysadmin/devops team. This is the team which maintains the Arch infrastructure such as the forums, AUR and wiki.

Jelle van der Waa

Arch monthly December

January 01, 2018 12:37 PM

Arch Linux @ 34C3

Arch Linux Trusted Users, Developers and members of the Security team have been at 34C3 and even held a small meetup. There was also an assembly where people from the irc channel could meet each other. Seeing how much interest there was this year, it might be worth it to host a self organized session or assembly with more stickers \o/

Fosdem 2018

Arch Linux Trusted Users and Developers will be at Fosdem 2018 in February. We don't have a booth or developer room but you can probably find us by looking for Arch stickers or hoodies :-)

2017 Repository cleanup

The repositories will be cleaned of orphan packages, which will be moved to the AUR, where they can be picked up and taken care of.

AUR 4.6.0 Release

A new version of aurweb has been released on December third. It brings markdown support for comments and more Trusted User specific changes.

Happy 2018!

I wish everyone a happy 2018 and keep on rolling :)

Jelle van der Waa

Arch monthly November

December 08, 2017 11:11 AM

New TU Andrew Crerar

Andrew Crerar applied to become a Trusted User and was accepted! Congratulations! His intention is to move firefox-develop from the AUR to [community].

77% Reproducible packages

Currently 77% of the packages are reproducible. Note that we do not vary everything yet between the two builds; for example the filesystem, build path and other options can still be varied.

Pro-audio mailing list

For audio enthusiasts there is a new mailing list to discuss audio packaging, development and usage etc..

GCC and GCC-multilib merged

Now that 32 bit support is dropped, the normal GCC package has gained support to build multilib packages, simplifying packaging.

Mime-types replaced with mailcap

Mime-types is now replaced by mailcap in this change.

Arch Linux at 34C3

A few Arch Linux Developers and Trusted users will be at 34C3 in Leipzig, if you are there, meet us there! A certain Arch user was recruited after talks at congress!

Analysis of AUR and Official Arch Repository data

Brian Caffey has made an analysis of the AUR and the Arch repositories.

Jelle van der Waa

Writing text in Unity

December 04, 2017 06:40 AM

Writing text in Unity isn't that easy, at least if you want to generate text with single gameobjects to be displayed in 3D and not only as a flat UI text. Every single letter must be dragged and dropped to its place to form a word.

Personally I was frustrated and didn't find a solution on the internet, so I decided to write this small script on my own, which generates text in the editor while you are typing, as you can see in the animation. For this example I used the letters from the Unity Asset Store package "Simple Icons - Cartoon Assets". Maybe you will find this helpful or have ideas to improve it; if so, please let me know.😉

A detailed description of how to use this small script can be found in my repository on GitHub.
Daniel Isenmann

Start VR development with the right toolkit

November 29, 2017 12:28 AM

More or less one year ago I have started with developing in VR and tried several plugins and tools in Unity to implement all the basic stuff like teleporting, grabbing, triggering and so on.

The first thing you will probably try or see in Unity is the official SteamVR plugin from Valve, if you have a Vive like me. Basically this SDK has everything you need to realize your project, but it's a very hard start if you try several things with the SteamVR SDK directly. There is not much documentation and there are few example scenes to look at. For me it was more frustrating than helpful. Searching the Unity Asset Store and the internet, you will maybe find the VRTK (Virtual Reality Toolkit) from Harvey Ball (aka TheStoneFox).

It's the best toolkit you can get if you are trying to do something in Unity for VR. You get tons of documentation for nearly all use cases you can imagine. Furthermore he has created lots of examples where you find nearly all that stuff sorted and split into different scenes to show you how to use them. Every single script he has written is licensed under the MIT license and can be studied directly in the SDK or in his GitHub repository for the toolkit.

A very active community discusses stuff in its own Slack channel. There is even a YouTube channel where he posts tutorials or does live Q&A sessions to answer your questions.

But why do I tell you this? Because Harvey Ball has done an absolutely astonishing job on this toolkit. You don't have to fiddle around with each single SDK for each vendor; you don't have to think about grabbing an object, teleporting around, using an object, realizing a button, a usable door or anything else you can think of. You can just use the VRTK with nearly all available VR headsets out there and start to code your idea or project right away.

And the best thing: he decided right from the beginning to give all this away for free! This is even more astonishing if you see all the effort behind such a toolkit. These are all reasons to give something back to Harvey. How? You can become a patron on his Patreon page, start contributing directly to the toolkit if you are a coder, or contact Harvey and ask how you can help. Right now he is really looking for donations to go on with the development of VRTK. It would be a shame if VRTK died just for lack of support. So show him a little bit of love and help for this really useful and totally necessary toolkit for VR development in Unity!

Look at all the links I have posted here and decide on your own how important this project is.
Daniel Isenmann

Reproducible Arch Linux?!

November 26, 2017 12:37 PM

The reproducible build initiative was started a long time ago by Debian and has grown to include more projects. Arch is now also in the process of getting reproducible build support, thanks to the hard work of Anthraxx, Sangy, and many more volunteers. Patches to support reproducible builds have landed in pacman git and will hopefully be included in the next stable release soon! Meanwhile, with the help of the rebuild infrastructure, rebuilds have been started!

Currently 77% of the 17% of packages tested so far are reproducible, as can be found here. This page is fed by the work done by two Jenkins builders, which currently build the whole Arch repository.

The builder builds the package twice in different environments and then uses diffoscope to find differences in packages. Usually the differences are due to timestamps :-). Now that we have some results of rebuilds, we can start fixing our packages. The work I did so far:

  • Fixing 404 sources of our packages, some of the source failures were due to being used and not

    This has been fixed in SVN. Old pypi links also needed to be fixed

  • One package's .install file contained a killall statement; I'm not sure why, but it shouldn't be required, so it was eradicated

  • Integrity mismatch, so upstream did a ninja re-release, annoying but fixed

  • Imagemagick's convert sets some metadata in the resized PNGs, which makes reproducible builds fail since it does not adhere to SOURCE_DATE_EPOCH.

  • Missing checkdepends on pytest-runner, which is automatically downloaded by the build tools but failed in the reproducible build. Simply adding the dependency to checkdepends fixed it.

As you can see, only one of the bullet points was really a reproducible build issue; the others were packaging issues. So I can conclude that reproducible builds will increase the packaging quality in the Arch repository. Having the packages in our repository always buildable will also help the Arch Linux 32 project.

The Arch reproducible project still needs a lot of work to make it possible for a user to verify a package build against the repository package.

P.S.: If you are at 34C3 this year and interested, visit the reproducible build assembly.

Jelle van der Waa

Using the WRLD Unity SDK with a stencil mask object

November 24, 2017 09:32 AM

Maybe you have heard about the great WRLD project, which provides a great way to display real world map data in your project. Furthermore they provide several different SDKs to access this data.

For a small project I needed some map data to visualize in a 3D scene inside Unity. Using the Unity SDK from WRLD you can easily access that data and have it displayed in your scene. Sadly they render the map all over your scene and there is no way to restrict its size. At least I haven't found any; even if you use their script attached to a GameObject with a specific size, the map will be displayed all over the scene.

After some failures and searching the web I stumbled upon a video showing the usage of WRLD in an AR environment. There they do exactly what I needed. Luckily the video was made by WRLD and they also provided two very good blog posts where they explain how they have done it. With the help of these blog posts I implemented it without all the AR stuff and came up with the proof of concept you can see in the animation.

The displayed cube is used as a stencil mask for the map and if you move the cube or the map, only the part of the map which is inside the cube will be rendered. Also new tiles of the map are loaded dynamically depending on the main camera in the scene. I have published the Unity project on GitHub to provide the solution ready to use for your project and also to archive it for myself. You will need a valid API key from WRLD, just register at their website and generate one for your needs. Then insert your API key at the WRLD Map GameObject:

The project includes the WRLD Unity SDK, which you can also find in the Unity Asset Store. But be careful if you replace the included one with the official one from the store, because I have made some changes they mention in their blog posts. So make sure to apply the code changes if you replace the integrated WRLD SDK.

Hope you will find it useful. If you find a bug or have useful hints, let me know, because I'm quite new to Unity and thankful for anything related to it.
Daniel Isenmann

Arch monthly October

November 11, 2017 10:11 AM

This is the second edition of Arch monthly, mostly due to the lack of time to work on Arch weekly. So let's start with the roundup of last month.

New TU David Runge

David Runge applied to become a Trusted User and was accepted! He mentioned having a huge interest in pro-audio, so hopefully there will be improvements in that area!

Farewell 32 bit

After nine months of deprecation period, 32 bit is now unsupported on Arch Linux. For people with 32 bit hardware there is the Arch Linux 32 project which intends to keep 32 bit support going.

AUR Changes Affecting Your Privacy

The next aurweb release, which will be released on 2017-12-03, includes a public interface to obtain a list of user names of all registered users. This means that, starting on 2017-12-03, your user name will be visible to the general public. The user name is the account name you specified when registering, and it is the only information included in this list. See this link for more information.

#archlinux-testing irc channel

An irc channel has been created for coordination between Arch Linux testers. See more about becoming an official tester here.

Jelle van der Waa

The end of i686 support

November 08, 2017 01:39 PM

Following 9 months of deprecation period, support for the i686 architecture effectively ends today. By the end of November, i686 packages will be removed from our mirrors and later from the packages archive. The [multilib] repository is not affected.

For users unable to upgrade their hardware to x86_64, an alternative is a community maintained fork named Arch Linux 32. See their website for details on migrating existing installations.

Bartłomiej Piotrowski@Official News

Testing your salt states with kitchen-salt

October 04, 2017 04:17 PM

What is Kitchen and why would someone use it.

test-kitchen was originally written as a way to test chef cookbooks, but the provisioners and drivers are pluggable; kitchen-salt enables salt to be the provisioner instead of chef.

The goal of kitchen-salt is to make it easy to test salt states or formulas independently of a production environment. It allows for doing quick checks of states and making sure that upstream changes in packages will not affect deployments. By using platforms, users can run checks on their states against the environment they run in production, as well as check future releases of distributions before doing major upgrades. It is also possible to test states against multiple versions of salt to make sure there are no major regressions.

Example formula

This article will be using my wordpress-formula to demo the major usage points of kitchen-salt.

Installing Kitchen

Most distributions provide a bundler gem in the repositories, but some have a version of ruby that is too old to use kitchen. The easiest way to use kitchen on each system is to use a ruby version manager like rvm or rbenv (rbenv is very similar to pyenv).

Once ruby bundler is installed, it can be used to install localized versions of the ruby packages for each repository, using the bundle install command.

$ bundle install
The latest bundler is 1.16.0.pre.2, but you are currently running 1.15.4.
To update, run `gem install bundler --pre`
Using artifactory 2.8.2
Using bundler 1.15.4
Using mixlib-shellout 2.3.2
Using mixlib-versioning 1.2.2
Using thor 0.19.1
Using net-ssh 4.2.0
Using safe_yaml 1.0.4
Using mixlib-install 2.1.12
Using net-scp 1.2.1
Using net-ssh-gateway 1.3.0
Using test-kitchen 1.17.0
Using kitchen-docker 2.6.1.pre from (at master@9eabd01)
Using kitchen-salt 0.0.29
Bundle complete! 3 Gemfile dependencies, 13 gems now installed.
Use `bundle info [gemname]` to see where a bundled gem is installed.

This will require having a separate Gemfile to hold the requirements for running test-kitchen.

source ""

gem "test-kitchen"
gem "kitchen-salt"
gem 'kitchen-docker', :git => ''

Because I am also testing opensuse, right now the git version of kitchen-docker is required.

Using kitchen

$ bundle exec kitchen help
  kitchen console                                 # Kitchen Console!
  kitchen converge [INSTANCE|REGEXP|all]          # Change instance state to converge. Use a provisioner to configure one or more instances
  kitchen create [INSTANCE|REGEXP|all]            # Change instance state to create. Start one or more instances
  kitchen destroy [INSTANCE|REGEXP|all]           # Change instance state to destroy. Delete all information for one or more instances
  kitchen diagnose [INSTANCE|REGEXP|all]          # Show computed diagnostic configuration
  kitchen driver                                  # Driver subcommands
  kitchen driver create [NAME]                    # Create a new Kitchen Driver gem project
  kitchen driver discover                         # Discover Test Kitchen drivers published on RubyGems
  kitchen driver help [COMMAND]                   # Describe subcommands or one specific subcommand
  kitchen exec INSTANCE|REGEXP -c REMOTE_COMMAND  # Execute command on one or more instance
  kitchen help [COMMAND]                          # Describe available commands or one specific command
  kitchen init                                    # Adds some configuration to your cookbook so Kitchen can rock
  kitchen list [INSTANCE|REGEXP|all]              # Lists one or more instances
  kitchen login INSTANCE|REGEXP                   # Log in to one instance
  kitchen package INSTANCE|REGEXP                 # package an instance
  kitchen setup [INSTANCE|REGEXP|all]             # Change instance state to setup. Prepare to run automated tests. Install busser and related gems on one or more instances
  kitchen test [INSTANCE|REGEXP|all]              # Test (destroy, create, converge, setup, verify and destroy) one or more instances
  kitchen verify [INSTANCE|REGEXP|all]            # Change instance state to verify. Run automated tests on one or more instances
  kitchen version                                 # Print Kitchen's version information

The kitchen commands I use the most are:

  • list: show the current state of each configured environment
  • create: create the test environment with ssh or winrm
  • converge: run the provision command, in this case salt_solo and the specified states
  • verify: run the verifier
  • login: log in to the created environment
  • destroy: remove the created environment
  • test: run create, converge, verify, and then destroy if it all succeeds

For triaging github issues, I regularly use bundle exec kitchen create <setup> and then salt bootstrap to install the salt version we are testing.

Then, to set up the environment I want to run the tests in, I run bundle exec kitchen converge <setup>

Configuring test-kitchen

There are 6 major parts of the test-kitchen configuration file. This is .kitchen.yml and should be in the directory inside of which the kitchen command is going to be run.

  • driver: This specifies the driver configuration. Drivers are how the virtual machine is created; there are many kitchen drivers to choose from (I prefer docker)
  • verifier: The command to run for tests to check that the converge ran successfully.
  • platforms: The different platforms/distributions to run on
  • transport: The transport layer to use to talk to the vm. This defaults to ssh, but winrm is also available.
  • suites: sets of different test runs.
  • provisioner: The plugin for provisioning the vm for the verifier to run against. This is where kitchen-salt comes in.

For the driver on the wordpress-formula, the following is set:

  name: docker
  use_sudo: false
  privileged: true
    - 80

This is using the kitchen-docker driver. If the user running kitchen does not have the correct privileges to run docker, then use_sudo: true should be set. All of the containers being used here run systemd as the exec command, so privileged: true needs to be set. And then port 80 is forwarded to the host so that the verifier can run commands against it to check that wordpress has been set up.

For the platforms, the following are set up to run systemd on container start.

  - name: centos
      run_command: /usr/lib/systemd/systemd
  - name: opensuse
      run_command: /usr/lib/systemd/systemd
        - systemctl enable sshd.service
  - name: ubuntu
      run_command: /lib/systemd/systemd
  - name: debian
      run_command: /lib/systemd/systemd

All of these distributions except for opensuse have sshd.service enabled when the package is installed, so we only have to have one provision command to enable sshd for opensuse. The rest have a command to configure the driver run_command to the correct systemd binary for that distribution.

For suites, there is only one suite.

  - name: wordpress

If multiple sets of pillars or different versions of salt needed to be tested, they would be configured here.

  - name: nitrogen
  - name: develop
      salt_bootstrap_options: -X -p git -p curl -p sudo git develop

And there would be multiple suites, one for each platform, created and tested.

And lastly for the verifier.

  name: shell
  remote_exec: false
  command: pytest -v tests/integration/

There are a couple of base verifiers. I usually use the shell verifier with testinfra, which has multiple connectors to run pytest-style test functions inside of the container.

Kitchen also sets a $KITCHEN_SUITE variable, so different test files can be run for each suite.

  name: shell
  remote_exec: false
  command: pytest -v tests/integration/$KITCHEN_SUITE

For salt-jenkins, since we are setting up the containers to run the SaltStack testing suite, the verifier is set up to run inside of the container and run the salt testing suite.

  name: shell
  remote_exec: true
  command: '$(kitchen) /testing/tests/ -v --output-columns=80 --run-destructive<%= ENV["TEST"] ? " -n #{ENV["TEST"]}" : "" %>'

remote_exec will cause the command to be run inside of the container. The kitchen command uses the installed salt to lookup if py3 was used or not, so that the correct python executable is used to run the test suite. Then if the TEST environment variable is set, that test is run, otherwise the full test suite is run.

Configuring kitchen-salt

The documentation for kitchen-salt is located here

  name: salt_solo
  salt_install: bootstrap
  salt_version: latest
  salt_bootstrap_options: -X -p git -p curl -p sudo
  is_file_root: true
  require_chef: false
    - .circleci/
    - Dockerfile
    - .drone.yml
    - .git/
    - .gitignore
    - .kitchen/
    - .kitchen.yml
    - Gemfile
    - Gemfile.lock
    - requirements.txt
    - tests/
    - .travis.yml
    - name: apache
      repo: git
    - name: mysql
      repo: git
    - name: php
      repo: git
        - wordpress
          - wordpress
          - wordpress
            password: quair9aiqueeShae4toh
            host: localhost
              - database: wordpress
                  - all privileges
          admin_user: gtmanfred
          title: "GtManfred's Blog"
  • name: The name of the provisioner is salt_solo
  • salt_install: This defaults to bootstrap which installs using the salt bootstrap. Other options are apt and yum which use the repository. ppa allows for specifying a ppa from which to install salt. And distrib which just uses whatever version of salt is provided by the distribution repositories.
  • salt_bootstrap_options: These are the bootstrap options that are passed to the bootstrap script. -X can be passed here to not start the salt services, because salt_solo runs salt-call and doesn't use the salt-minion process.
  • is_file_root: This is used to say just copy everything from the current directory to the tmp fileserver in the kitchen container. If there were not a custom module and state for this formula, kitchen could be set to have formula: wordpress to copy the wordpress-formula to the kitchen environment.
  • salt_copy_filter: This is a list of files to not copy to the kitchen environment.
  • dependencies: This is the fun part. If the formula depends on other formulas, they can be configured here. The following types are supported:
    • path - use a local path
    • git - clone a git repository
    • apt - install an apt package
    • yum - install a yum package
    • spm - install a spm package
  • state_top: This is the top file that will be used to run at the end of the provisioner
  • pillars: This is a set of custom pillars for configuring the instance. There are a couple other ways to provide pillars that are also useful.

Running test kitchen on pull requests.

Any of the major testing platforms should be usable. If a complicated setup is needed, Jenkins is probably the best; unfortunately I do not know jenkins very well, so I have provided examples for the three I know how to use.

My personal favorite is Drone. You can set up each one of the test suites to run with a mysql container if you do not have states that need mysql-server installed on the instance. Also, for each job runner for Drone, you just need to set up another drone-agent on a server running docker and hook it into the drone-server; then each drone-agent can pick up a job and run it.

Daniel Wallace

Arch monthly September

October 02, 2017 08:00 PM

This is the first edition of Arch monthly, mostly due to the lack of time to work on Arch weekly. So let's start with the roundup of last month.

Two new Trusted Users

Alad and Foxboron joined the Trusted Users team! Congrats!

Archweb signoff helper

This has been around for a while, but Foxboron created this great tool to sign off packages in [testing] simply from the CLI. If you are an official tester, try it out!

Arch Classroom - Python for beginners

Pulec is organizing a classroom about Python for beginners on Wednesday, October 04, 2017 at 16:00 UTC in the #archlinux-classroom channel on the freenode network. See this post for more details.

Eli Schwartz is our new bugwrangler

Eli Schwartz joins as bugwrangler, helping out by assigning and investigating new bugs.

Arch-meson wrapper

If you package software which uses meson as a build tool, the arch-meson wrapper is useful since it sets defaults for Arch.

Arch manpages website

A new website popped up which hosts Arch manual pages.

Jelle van der Waa

Arch Linux Vagrant boxes

September 08, 2017 05:36 PM

Hello everybody,
I am pleased to announce that we provide official Arch Linux Vagrant boxes now: … 2017.09.07

URL to the project:

Shibumi@Forum Announcements

Perl library path change

September 02, 2017 11:44 AM

The perl package now uses a versioned path for compiled modules. This means that modules built for a non-matching perl version will not be loaded any more and must be rebuilt.

A pacman hook warns about affected modules during the upgrade by showing output like this:

WARNING: '/usr/lib/perl5/vendor_perl' contains data from at least 143 packages which will NOT be used by the installed perl interpreter.
 -> Run the following command to get a list of affected packages: pacman -Qqo '/usr/lib/perl5/vendor_perl'

You must rebuild all affected packages against the new perl package before you can use them again. The change also affects modules installed directly via CPAN. Rebuilding will also be necessary again with future major perl updates like 5.28 and 5.30.

Please note that rebuilding was already required for major updates prior to this change; however, perl will now no longer try to load the modules and then fail in strange ways.

If the build system of some software does not detect the change automatically, you can use perl -V:vendorarch in your PKGBUILD to query perl for the correct path. There is also sitearch for software that is not packaged with pacman.
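For instance, you can query the configured paths from a shell. The exact directories vary with the installed perl version; the sample path in the comment is only illustrative:

```shell
# Print perl's versioned vendor module path (used for pacman-packaged modules):
perl -V:vendorarch
# vendorarch='/usr/lib/perl5/vendor_perl';   <- actual path varies per perl version

# And the site path, used for modules installed directly via CPAN:
perl -V:sitearch
```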

Florian Pritz@Official News

Responsive views for Forums and Wiki

August 30, 2017 05:34 AM


For the last few days I have been working on a more mobile-friendly
view of our wikis and forums. These have just been deployed. Here is
what changed:

While it's not perfect on small screens, it should at least be way more
readable on your mobile phone. Let me know of any issues though.

* Updated to MediaWiki 1.29.1
* Removed our fork of the MonoBook skin
* Introduce a new "ArchLinux" extension which injects some styles and
our navigation bar independent of the skin. This was quite a lot of
work to figure out, but future updates should be way easier now.
* The default skin is Vector; MonoBook is still available and can be
enabled in your personal settings
* The MobileFrontend extension has been removed (So we have a branded
view for mobile as well)
* PR see

* Created a github repo at
* PR at
* Some docker compose configuration to simplify development (similar
to the one in the wiki)

In addition to this I have been working on a re-implementation of Part of this is a new, more
mobile-friendly design. Especially the navigation, which moves the menu
entries into a so-called hamburger menu on smaller screens, is still
missing from the implementation mentioned above.

I plan to extract these "somehow" so we can use a common navigation in
all our websites. At least a generated snippet we can copy into our



Pierre@Forum Announcements

Till Dawn - first pre-alpha version available (VR zombie shooter)

August 23, 2017 07:13 AM

Finally we have released the first pre-alpha version of our new project Till Dawn. Till Dawn is a VR zombie survival shooter. It has been tested with the HTC Vive; the Oculus Rift is also supported, but untested.

You can download the game on the game page at or at gamejolt, where you will also find all information about the game. At the moment it's free, but you can donate as much as you want to support us.

So grab a free copy, connect your HTC Vive, play it and let us know what you think about it. Feedback is always welcome, so if you have some ideas for the game then let us know. But remember it's under development right now, so make sure to read the description and the known issues.


Here are some screenshots of the game, but to get a better impression you have to play it.

It's only available for Windows right now, because SteamVR and Unity3D under Linux are a little bit more complicated. Sorry to all Linux gamers out there; as soon as it is supported without errors, we will release it for Linux, too.

If you like the game then share the information, tweet about it (don't forget to mention @devplayrepeat), make a blog post or just tell your friends. To follow the development make sure that you regularly check the page of the game because we will post news there. 

ise (Isenmann)

Arch weekly #2

May 26, 2017 09:00 AM

This is the second edition of Arch weekly, a small weekly post about the news in the Arch Linux community.

Official docker image for Arch Linux!

After reporting about the Arch-boxes project last week: Pierres created the Arch Linux organization on Docker Hub and created a base image. The docker build script can be found here. Now you can easily run Arch in docker with a regularly updated base image!

docker run -ti archlinux/base /bin/bash

pyalpm 0.8.1 release

A bugfix release for pyalpm has been made: it fixes a memory leak, removes some unused code and contains some build fixes.

Archweb upgrade

Archweb has been upgraded to Django 1.8 LTS; previously it was running on 1.7, which is no longer supported. If you encounter any issues on please report them on the bugtracker.

MariaDB upgrade important news

There are plans to update MariaDB to 10.2.6. This will change the library soname from to and bring some dependency changes; more details are in the link.

New Trusted User foxxx0

Thore Bödecker joins the TU team, you can read his application here.

Discussion about improving the overall experience of contributors

Bartłomiej has started a discussion on arch-dev-public about improving the contributor experience and getting more external contributors involved in Arch Linux. Not only could existing Arch projects such as pyalpm, archweb and namcap use more contributors for developing new features and fixing bugs; Arch could also use more contributors for new projects and ideas, such as rebuild automation and the maintenance of our infrastructure. For those wondering what the infrastructure is about: Arch has a few dedicated servers for the forums, building packages and so on, and all these servers are managed with Ansible, with the playbooks kept in git.

Security updates of the week

The following packages received security updates:

Jelle van der Waa

Arch weekly #1

May 17, 2017 06:00 PM

This is the first edition of Arch weekly, a small weekly post about the news in the Arch Linux community. Hopefully this will be a recurring weekly blog post!

linux-hardened appears in [community]

After the disappearance of linux-grsec from the repos, due to the Grsecurity project no longer providing the required patches, Daniel Micay now provides an alternative: linux-hardened in [community]. The package is based on the following Linux fork, which contains more security patches than the mainline Linux kernel and enables more security configuration options by default, such as SLAB_FREELIST_RANDOM.

More information can be found on the wiki of the project.

Arch-boxes project

An effort has been made by Shibumi to provide official Arch Linux docker, vagrant (and maybe ec2) images. Currently there are virtualbox and qemu/libvirt options. View the project here.

Qt 4 now depends on OpenSSL 1.1

Even after the enormous OpenSSL 1.1 rebuild, not every package in the repository uses OpenSSL 1.1 yet. Qt 4, currently in [extra], uses OpenSSL 1.1, with 27 packages left in the repository which still depend on openssl-1.0. Other packages depending on OpenSSL 1.0 are now being rebuilt to stay compatible with Debian Stable and non-free software. See this bug report for more information.

Boost 1.64 rebuild

A rebuild is currently underway and will land in [testing] soon™.

[pacman-dev] Repository management discussion

Allan started a discussion on improving the current repository management tooling in pacman. Feedback and patches are welcome :)

GCC 7.1 hits [testing]

GCC 7.1 has landed in [testing]; please test it and report issues!

Security updates of the week

There are quite a lot of security advisories, you can view them here.

Jelle van der Waa

Deprecation of ABS tool and rsync endpoint

May 15, 2017 10:55 AM

Due to the high maintenance cost of the scripts related to the Arch Build System, we have decided to deprecate the abs tool, and with it rsync, as a way of obtaining PKGBUILDs.

The asp tool, available in [extra], provides similar functionality to abs. asp export pkgname can be used as a direct alternative; more information about its usage can be found in the documentation. Additionally, Subversion sparse checkouts, as described here, can be used to achieve a similar effect. For fetching all PKGBUILDs, the best way is cloning the svntogit mirrors.
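As an illustrative session (bash is used here as a stand-in package name, and the repository URLs are the ones documented on the wiki at the time, so verify them before use):

```shell
# Export the PKGBUILD and auxiliary files of one package with asp:
asp export bash

# Or use a Subversion sparse checkout of the packages repository:
svn checkout --depth=empty svn://svn.archlinux.org/packages
cd packages
svn update bash   # fetches only this package's directory

# Or clone all PKGBUILDs at once from the svntogit mirrors:
git clone git://projects.archlinux.org/svntogit/packages.git
```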

While the extra/abs package has already been dropped, the rsync endpoint (rsync:// will be disabled by the end of the month.

Bartłomiej Piotrowski@Official News

How my car insurance exposed my position

May 11, 2017 12:00 AM

As many car insurance companies do, my car insurance company provides a satellite device that can be put inside your car to report its location at any time, in any place.

By installing such a device in your car, the insurance company profiles your conduct, of course, but it could also help the police find your car if it gets stolen, and you will probably get a nice discount on the insurance price (even up to 40%!). Long story short: I got one.

Often such companies also provide an “App” for smartphones to easily track your car when you are away or to monitor your partner…mine (the company!) does.

So I downloaded my company’s application for Android, but unluckily it needs the Google Play Services to run. I am a FLOSS evangelist and, as such, I try to use FLOSS apps only, without gapps.

Luckily I’m also a developer and, as such, I try to develop the applications I need most; using mitmproxy, I started to analyze the APIs used by the App to write my own client.


As soon as the App starts you need to authenticate yourself to enable the buttons that allow you to track your car. Fair enough.

The authentication form first asks for your taxpayer’s code; I put in mine, and under the hood the App performs the following request:

curl -X POST -d 'BLUCS§<taxpayers_code>§-1' http://<domain>/BICServices/BICService.svc/restpostcheckpicf<company>

The Web service replies with a cell phone number (WTF?):


Wait. What do we see here already? Yes, besides the ugliest formatting ever and the fact that the request uses plain HTTP, it takes only 3 arguments to get a cell phone number? And guess what? The first and the last are constants. In fact, if we put in a nonexistent taxpayer’s code, keeping the same values, we get:


…otherwise we get a cell phone number for the given taxpayer’s code!

I hit my head and I continued the authentication flow.

After that, the App asks me to confirm that the cell phone number it got is still valid, but it also wants the password I got via mail when subscribing to the car insurance; OK, let’s proceed:

curl -X POST -d 'BLUCS§<taxpayers_code>§<device_imei>§<android_id>§<device_brand>-<device_model>_unknown-<api_platform>-<os_version>-<device_code>§<cell_phone_number>§2§<password>§§-1' http://<domain>/BICServices/BICService.svc/restpostsmartphoneactivation<company>

The Web service responds with:


The some_code parameter changes every time, so it seems to work as a “registration id”, but after this step the App unlocked the button to track my car.

I was already astonished at this point: how will the authentication work? Does it need this some_code in combination with my password at each request? Or maybe it will ask for my taxpayer’s code?

Car tracking

I started implementing the car tracking feature, which allows you to retrieve the last 20 positions of your car, so let’s analyze the request made by the App:

curl -X POST -d 'ASS_NEW§<car_license>§2§-1' http://<domain>/BICServices/BICService.svc/restpostlastnpositions<company>

The Web service responds with:

0§20§<another_code>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§ … (the same DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#… block repeated, once for each of the 20 positions)

WTH?!? No header?!? No cookie?!? No authentication parameters?!?

Yes, your assumption is right: you just need a car license and you get its last 20 positions. And what’s that another_code? I just wrote it down for the moment.
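To make the response format clearer, here is a small sketch that splits such a reply into its positions. The sample data is made up (anonymized coordinates), and the field layout is an assumption based on the placeholder response above:

```shell
# Anonymized sample reply: status§count§code§pos1§pos2, each position being
# "DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#<flags>-<country>-...".
resp='0§2§SOMECODE§01/05/2017 10:00:00#45.464#9.190#0#1#1#1-IT-Lombardia-Milano-Via Roma§02/05/2017 11:30:00#45.465#9.191#0#1#1#1-IT-Lombardia-Milano-Via Verdi'

# Split on the § record separator, then on # within each position,
# printing timestamp, latitude and longitude.
printf '%s\n' "$resp" | awk -F'§' '{
    for (i = 4; i <= NF; i++) {
        split($i, p, "#")
        print p[1], p[2], p[3]
    }
}'
```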

It couldn’t be real, I first thought (hoped): maybe they stored my IP somewhere, so that I’m authorized to get this data now. So let’s try from a VPN… oh damn, it worked.

Then I tried with a nonexistent car license and I got:


which means: “that car license is not in our database”.

So what could we get here with the help of crunch? Easy enough: a list of car licenses that are covered by this company, and the last 20 positions for each one.

I couldn’t stop now.

The Web client

This car insurance company also provides a Web client which permits more operations, so I logged in to analyze its requests. While it’s hosted on a different domain and uses a cookie for almost every request, it performs one single request to the domain I previously used, which isn’t authenticated and got my attention:

curl http://<domain>/<company>/(S(<uuid>))/NewRemoteAuthentication.aspx?RUOLO=CL&ID=<another_code>&TARGA=<car_license>&CONTRATTO=<foo>&VOUCHER=<bar>

This one replies with an HTML page that is shown in the Web client:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
    <meta name="GENERATOR" Content="Microsoft Visual Studio .NET 7.1" />
    <meta name="CODE_LANGUAGE" Content="C#" />
    <meta name="vs_defaultClientScript" content="JavaScript"/>
    <meta name="vs_targetSchema" content="" />
        <!--<meta content="IE=EmulateIE10" name="ie_compatibility" http-equiv="X-UA-Compatible" />-->
        <meta name="ie_compatibility" http-equiv="X-UA-Compatible" content="IE=7, IE=8, IE=EmulateIE9, IE=10, IE=11" />
    <form name="Form1" method="post" action="/<company>/(S(<uuid>))/NewRemoteAuthentication.aspx?RUOLO=CL&amp;ID=<another_code>&amp;TARGA=<car_license>" id="Form1">
<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwULLTIwNzEwODIsJFNAgEPKAJDIeBsdSpc2libGVnZGRic5McHC9+DqRx0H+jRt5O+/PLtw==" />

            <iframe id="frm1" src="NewRicerca.aspx" width="100%" height="100%"></iframe>

<SCRIPT language="JavaScript">
// -->

It includes an iframe (sigh!), but that’s the interesting part!!! Look:

Car history

From that page you get:

  • the full name of the person that has subscribed the insurance;
  • the car model and brand;
  • the total amount of kilometers made by the car;
  • the total amount of travels (meant as “car is moving”) made by the car;
  • access to months travels details (how many travels);
  • access to day travels details (latitude, longitude, date and time);
  • access to months statistics (how often you use your car).

(Screenshots: car month history, car day history, car month statistics)

There is a lot of information here, and these statistics have been available since the installation of the satellite device.

The request isn’t authenticated, so I just have to understand which parameters to fill in. Often not all parameters are required, so I tried removing some to find out which ones are really needed. It turns out that I can simplify it as:

curl http://<domain>/<company>/(S(<uuid>))/NewRemoteAuthentication.aspx?RUOLO=CL&ID=<another_code>&TARGA=<car_license>

But there’s still an another_code there… mmm, wait, it looks like the number I wrote down previously! And yes, it is!

So, http://<domain>/<company>/(S(<uuid>))/NewRicerca.aspx is the page that really shows all the information, but how do I generate that uuid thing?

I tried removing it first, and then I got an empty page. Sure, makes sense: how would that page ever know which data I’m looking for?

Then it must be the NewRemoteAuthentication.aspx page that does something. I tried again by removing the uuid from that URL and, to my full surprise, it redirected me to the same URL, but with the uuid part filled in as a path parameter! Now I can finally invoke NewRicerca.aspx using that uuid and read all the data!


You just need a car license which is covered by this company to get all the travels made by that car, the full name of the person owning it and its position in real time.

I reported this privacy flaw to the CERT Nazionale which wrote to the company.

The company fixed the leak 3 weeks later by providing new Web service endpoints that use authenticated calls. The company mailed its users, telling them to update their App as soon as possible. The old Web services were shut down a month and a half after my first contact with the CERT Nazionale.

I could be wrong, but I suspect the privacy flaw has been around for 3 years because the first Android version of the App uses the same APIs.

I got no bounty.

The company is a leading provider of telematics solutions.

Andrea Scarpino

First x86_64 TalkingArch

April 07, 2017 05:41 PM

The TalkingArch team is pleased to present the latest version of TalkingArch, available from the usual location. This version features all the latest software, including Linux kernel 4.10.6.

The most important feature of this live image is the new x86_64-only support, which removes the i686 support that was present in previous images. This makes the latest version much smaller, but it will no longer work on older i686 machines.

This version is the only one that will be listed on the download page, as it always only includes the latest release. However, anyone needing an image that works on i686 may still download the last dual-architecture image, either via http or BitTorrent, until i686 is completely dropped from the official Arch repositories later this year. The TalkingArch team is also following the latest information on an i686 secondary port; if there is enough of a need, and if the build process is as straightforward as for the x86_64 version currently available, i686 images may be provided once the port is complete and fully working.


ca-certificates-utils 20170307-1 upgrade requires manual intervention

March 15, 2017 09:27 PM

The upgrade to ca-certificates-utils 20170307-1 requires manual intervention because a symlink which used to be generated post-install has been moved into the package proper.

As deleting the symlink may leave you unable to download packages, perform this upgrade in three steps:

# pacman -Syuw                           # download packages
# rm /etc/ssl/certs/ca-certificates.crt  # remove conflicting file
# pacman -Su                             # perform upgrade
Jan Alexander Steffens@Official News

mesa with libglvnd support is now in testing

February 27, 2017 08:15 PM

mesa-17.0.0-3 can now be installed side by side with the nvidia-378.13 driver without any libgl/libglx hacks, with the help of Fedora and upstream xorg-server patches.

  • The first step was to remove the libglx symlinks with xorg-server-1.19.1-3 and the associated mesa/nvidia drivers, through the removal of various libgl packages. It was a tough moment because it broke Optimus systems; the xorg-server configuration needed manual updating.

  • The second step is now here, with an updated 10-nvidia-drm-outputclass.conf file that should provide an "out-of-the-box" working xorg-server experience on Optimus systems.

Please test this extensively and post your feedback in this forum thread or in our bugtracker.

Laurent Carlier@Official News

Using salt to build docker containers

February 17, 2017 01:09 AM

How docker works now.

When you build a docker container using only docker tools, what you are actually doing is building a bunch of layers. Great. Layers are a good idea: you get to build a bunch of docker images that share a lot of layers, so you only have to rebuild the changes when you update containers. But what you end up with is a really ugly, hard-to-read Dockerfile, because it tries to put a bunch of commands on the same line.

# from
FROM php:7.0-apache

RUN a2enmod rewrite

# install the PHP extensions we need
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev libpq-dev \
    && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
    && docker-php-ext-install gd mbstring opcache pdo pdo_mysql pdo_pgsql zip

# set recommended PHP.ini settings
# see
RUN { \
        echo 'opcache.memory_consumption=128'; \
        echo 'opcache.interned_strings_buffer=8'; \
        echo 'opcache.max_accelerated_files=4000'; \
        echo 'opcache.revalidate_freq=60'; \
        echo 'opcache.fast_shutdown=1'; \
        echo 'opcache.enable_cli=1'; \
    } > /usr/local/etc/php/conf.d/opcache-recommended.ini

WORKDIR /var/www/html

ENV DRUPAL_MD5 57526a827771ea8a06db1792f1602a85

RUN curl -fSL "${DRUPAL_VERSION}.tar.gz" -o drupal.tar.gz \
    && echo "${DRUPAL_MD5} *drupal.tar.gz" | md5sum -c - \
    && tar -xz --strip-components=1 -f drupal.tar.gz \
    && rm drupal.tar.gz \
    && chown -R www-data:www-data sites modules themes

Here is a Dockerfile that is used to build a Drupal container. The RUN command in the middle is really convoluted, and I can't imagine trying to write a container like this. But what if you could use salt to configure your docker container to do the same thing?

Using Salt States

NOTE: This is all to be added in the Nitrogen release of salt, but you should be able to drop in the dockerng state and module from develop once this PR is merged.

It is worth mentioning that this is a contrived example, because one of the requirements is to have python installed in the docker container. So for the salt example you will need to build a slightly modified parent container using the following command.

docker run --name temp php:7.0-apache bash -c 'apt-get update && apt-get install -y python' \
    && docker commit temp php:7.0-apache-python \
    && docker rm temp

Now, this shouldn't be a problem when building images; it just allows salt to manage the layers. If I were to do this, I would take the debian image, use salt states to set up apache and the base dependencies for building the modules, and then run the following state.

Build Drupal Image:
    - name: myapp/drupal
    - base: php:7.0-apache-python
    - sls: docker.drupal

Then this would build the image with my salt://docker/drupal.sls state.

{%- set exts = ('gd', 'mbstring', 'opcache', 'pdo', 'pdo_mysql', 'pdo_pgsql', 'zip') %}
{%- set DRUPAL_VERSION = '8.2.6' %}
{%- set DRUPAL_MD5 = '57526a827771ea8a06db1792f1602a85' %}

enable rewrite module:
    - name: rewrite

install extensions:
    - names:
      - libpng12-dev
      - libjpeg-dev
      - libpq-dev
    - names:
      - docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr:
        - prereq:
          - cmd: docker-php-ext-install gd
      {%- for ext in exts %}
      - docker-php-ext-install {{ext}}:
        - creates: /usr/local/etc/php/conf.d/{{ext}}.ini
      {%- endfor %}

configure opcache:
    - name: /usr/local/etc/php/conf.d/opcache-recommended.ini
    - contents: |

get drupal:
    - name: /var/www/html
    - source:{{DRUPAL_VERSION}}.tar.gz
    - source_hash: md5={{DRUPAL_MD5}}
    - user: www-data
    - group: www-data
    - enforce_toplevel: False
    - options: --strip-components=1

And we are done. In my honest opinion, this is significantly easier to read. First we enable the rewrite module. Then we install the packages needed for compiling the different php extensions. Then we use the built-in docker-php-ext-* scripts to build the different php modules, and we put the recommended opcache settings in place. Lastly, we download and extract the drupal tarball and put it in the correct place.

There is one caveat: right now we do not have the ability to bake in the WORKDIR and ENV variables, so those will have to be provided when the container is started.

Start Drupal Container:
    - name: drupal
    - image: myapp/drupal:latest
    - working_dir: /var/www/html

I am going to look into adding those to the dockerng.create command that is used to create the starting container for the sls_build, so that they can be saved in the image.

Daniel Wallace@Daniel Wallace

Planet Arch Linux

Planet Arch Linux is a window into the world, work and lives of Arch Linux hackers and developers.

Last updated on December 13, 2019 01:02 PM. All times are normalized to UTC time.