Planet Arch Linux

libutf8proc>=2.1.1-3 update requires manual intervention

July 14, 2018 04:55 PM

The libutf8proc package prior to version 2.1.1-3 had an incorrect soname link. This has been fixed in 2.1.1-3, so the upgrade will need to overwrite the untracked soname link created by ldconfig. If you get an error

libutf8proc: /usr/lib/libutf8proc.so.2 exists in filesystem

when updating, use

pacman -Suy --overwrite usr/lib/libutf8proc.so.2

to perform the upgrade.

Antonio Rojas@Official News

Arch Linux at FrOSCon

July 10, 2018 07:37 PM

Yet another shoutout for FrOSCon, which will be held on the 25th and 26th of August. Arch Linux will have a devroom with talks, so far about Linux Pro Audio and our general infrastructure / reproducible builds.

Thanks to Stickermule there will be Arch Linux stickers to hand out.

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

Arch monthly June

July 02, 2018 07:37 PM

Archive cleanup

The Arch Archive has been cleaned up; the discussion started in this mail thread. The archive server was running out of space and therefore needed some cleaning: all packages which are not required for reproducible builds were removed (these were from 2013/2014/2015). Packages from these years should also be available at the Internet Archive.

FrOSCon

There will be an Arch Linux Devroom on the Sunday of FrOSCon with talks and the possibility to meet members of the team.

Python2 modules cleanup

A proposal has been sent out to remove 'orphan' python2 modules, as a start of phasing out python2 packages.

Package guidelines improvements

Foxboron proposed improving the package guidelines.

Core/extra cleanup

Core and extra have been cleaned up a bit; the removed packages were pcmciautils, speedtouch and zd1211-firmware.

AUR package compromised

As expected from the AUR, anyone can upload a package or adopt one and change it. This happened to acroread and some other packages on Sunday; always review packages you get from the AUR before building them.

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

Arch monthly May

June 02, 2018 07:37 PM

Pacman release

Finally! A new pacman release. This version adds some critical bits for reproducible builds, and the pacman repository has been shed of misc tools, which are now in pacman-contrib. More details in the changelog and on Reddit.

BUILDINFO Rebuild

For reproducible builds, every package in the repository built on a user's system should create exactly the same package as the repository package. To achieve this, the packages which were installed in the build chroot are recorded in a BUILDINFO file (man BUILDINFO) which is added to the .pkg.tar.xz package. BUILDINFO files were added to pacman a while ago, but not every package contains them yet! Interestingly enough, even a rolling release distro contains packages from 2013; these are now being rebuilt! This also ties into the cleanup of archive.archlinux.org, since the archive server is almost full and the 2013/2014/2015 directories will be removed. If you have a good network connection and want to mirror the archive, reach out!

pkgconf replaces pkg-config

As can be read on the mailing list, pkgconf has now replaced pkg-config.

GCC 8 in [core]

The latest version of GCC, GCC 8, has landed in [core]. It enables more warnings by default, so older packages might fail to build if they enable -Werror.

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

Pacman-5.1 – Don’t Use the Force, Luke!

May 28, 2018 11:18 PM

Wow… look at all the cobwebs around here! No posts in two years. But the need for a pacman release post has dragged me back. I clearly still remembered the password, so that is a bonus!

As is tradition, before I get into details, I need to thank everyone for their help in making this release. Here are the top 10 committers:

$ git shortlog -n -s v5.0.0..v5.1.0
    82  Allan McRae
    60  Andrew Gregory
    45  Eli Schwartz
    16  Ivy Foster
    10  Dave Reisner
     9  Christian Hesse
     9  Gordian Edenhofer
     8  Alastair Hughes
     7  Rikard Falkeborn
     6  Michael Straube

(I win!) Lots of new names there which is always really appreciated. And as usual a long tail of contributors submitting the occasional patch – there were 48 contributors in total.

Onto what has changed in this release. There is a lack of what I would call a killer feature in this release. Mostly a lot of small changes that improve usability, which is why there was so much time between releases. Here is a detailed list of changes. However, there are a few things worth highlighting.

There is a new option --overwrite, which is a replacement for the too often misused --force (hence the release name). This allows fine-grained control over which file conflicts pacman is allowed to ignore. Handling the latest upgrade requiring user intervention in Arch Linux would now look like:

pacman -Syu --overwrite usr/lib/libmozjs-52.so.0

You can even use globs when specifying the files to overwrite. Not only is specifying exact files to overwrite a lot safer than the old --force, there are also some common sense restrictions there too (you can't overwrite a directory with a file, or force package installs with conflicting files).
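
As a hedged example of the glob support just mentioned (the pattern here is purely illustrative); quoting the glob keeps the shell from expanding it before pacman sees it:

# overwrite any conflicting files that match the glob
pacman -Syu --overwrite 'usr/lib/libmozjs-52.so*'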

We have also added a --sysroot option that will replace --root. Basically, this now works the way people will expect – for example, the configuration file used is the one in the specified root, and not the local one. This does require a bit more setup while creating a new install root, but hopefully will be a lot more robust.
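
A minimal sketch of what that might look like, assuming a freshly bootstrapped installation mounted at /mnt/arch (the path is hypothetical):

# use the pacman.conf and databases inside the install root,
# not the host's, when performing the upgrade
pacman --sysroot /mnt/arch -Syu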

We have also added support for reproducible builds. This was mostly ensuring all files had the same timestamp and obeyed the SOURCE_DATE_EPOCH standard. We also added a .BUILDINFO file within each package, recording information about the environment a package was built in. This allows scripts to regenerate the build environment to demonstrate a package is reproducible (particularly important in rolling release distros).
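
If you want to look at this metadata yourself, a hedged example (the package path is illustrative) is to extract the file straight from a package in the pacman cache:

# print the embedded .BUILDINFO of a cached package to stdout
bsdtar -xOf /var/cache/pacman/pkg/pacman-5.1.0-1-x86_64.pkg.tar.xz .BUILDINFO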

There was also improved support for debugging packages. Split packages now produce a single debug package instead of one for each split package. This makes it easier to get all required debug symbols for a particular package (and hopefully easier for distros to carry these packages…). Also, we include relevant source files in the debug packages, allowing us to step through the code.

Finally, I killed off the “contrib” directory as it was taking excessive amounts of pacman developer time. That means no more checkupdates, paccache, … However, this has been picked up as a separate project, which is available by installing pacman-contrib in Arch Linux.

As always, this is a bug free release. But if you spot something you think is a bug, please file a bug report and we can assign blame – which is more important than fixing! (The pool on which developer created the first pacman bug of this release is still open at the time of posting.)

Allan@Allan McRae

IWD: the new WPA-Supplicant Replacement

May 13, 2018 08:08 PM

Heyho,
I just want to inform you all that I have pushed IWD version 0.3 into [community].
IWD is a new wireless daemon and aims to replace wpa_supplicant in the future.
I have created a first wiki page for the package as well: https://wiki.archlinux.org/index.php/Iwd

IWD comes with a more secure approach: it doesn't use OpenSSL or GnuTLS. Instead it uses different kernel functions for cryptographic operations.

If you want to know more you can check out this video: https://www.youtube.com/watch?v=F2Q86cphKDo
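
For those who want to try it out, here is a hedged sketch (it assumes the iwctl client shipped with the package; the device and network names are placeholders):

# start the daemon, then scan and connect with the bundled iwctl client
systemctl enable --now iwd.service
iwctl station wlan0 scan
iwctl station wlan0 get-networks
iwctl station wlan0 connect MyNetwork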

Shibumi@Forum Announcements

js52 52.7.3-2 upgrade requires intervention

May 04, 2018 08:27 PM

Due to the SONAME of /usr/lib/libmozjs-52.so not matching its file name, ldconfig created an untracked file /usr/lib/libmozjs-52.so.0. This is now fixed and both files are present in the package.

To pass the upgrade, remove /usr/lib/libmozjs-52.so.0 prior to upgrading.
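
A small sketch of those two steps:

# remove the untracked file created by ldconfig, then upgrade
rm /usr/lib/libmozjs-52.so.0
pacman -Syu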

Jan Steffens@Official News

glibc 2.27-2 and pam 1.3.0-2 may require manual intervention

April 20, 2018 07:45 AM

The new version of glibc removes support for NIS and NIS+. The default /etc/nsswitch.conf file provided by the filesystem package already reflects this change. Please make sure to merge the pacnew file if it exists prior to the upgrade.

NIS functionality can still be enabled by installing libnss_nis package. There is no replacement for NIS+ in the official repositories.

pam 1.3.0-2 no longer ships the pam_unix2 module and the pam_unix_*.so compatibility symlinks. Before upgrading, review the PAM configuration files in the /etc/pam.d directory and replace removed modules with pam_unix.so. Users of pam_unix2 should also reset their passwords after such a change. Defaults provided by the pambase package do not need any modifications.
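
A hedged way to find configuration files that still reference the removed modules before upgrading:

# list files in /etc/pam.d that mention pam_unix2 or a pam_unix_*.so symlink
grep -rlE 'pam_unix2|pam_unix_[a-z]+\.so' /etc/pam.d/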

Bartłomiej Piotrowski@Official News

Battleship

February 23, 2018 08:34 PM

Solving Battleships with SAT
Kyle Keen

zita-resampler 1.6.0-1 -> 2 update requires manual intervention

February 22, 2018 07:57 AM

The zita-resampler 1.6.0-1 package was missing a library symlink that has been readded in 1.6.0-2. If you installed 1.6.0-1, ldconfig would have created this symlink at install time, and it will conflict with the one included in 1.6.0-2. In that case, remove /usr/lib/libzita-resampler.so.1 manually before updating.
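
A small sketch of the manual step, with a check first that the symlink is indeed untracked:

# "error: No package owns ..." confirms the file was created by ldconfig
pacman -Qo /usr/lib/libzita-resampler.so.1
rm /usr/lib/libzita-resampler.so.1
pacman -Syu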

Antonio Rojas@Official News

Arch monthly January

February 06, 2018 12:37 PM

Arch Linux @ FOSDEM

Arch Linux Trusted Users, Developers and members of the Security team have been at FOSDEM. Next year there will hopefully be more stickers and maybe a talk, but it was great to meet some Arch users in real life, discuss, and even hack on the Security Tracker.

TU Application: Ivy Foster

A new TU applied; you can read the sponsorship here.

New DevOps member Phillip Smith

A new member joined the sysadmin/devops team. This is the team which maintains the Arch infrastructure such as the forums, AUR and wiki.

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

Arch monthly December

January 01, 2018 12:37 PM

Arch Linux @ 34C3

Arch Linux Trusted Users, Developers and members of the Security team have been at 34C3 and even held a small meetup. There was also an #archlinux.de assembly where people from the IRC channel could meet each other. Seeing how much interest there was this year, it might be worth hosting a self-organized session or assembly next time, with more stickers \o/

Fosdem 2018

Arch Linux Trusted Users and Developers will be at Fosdem 2018 in February. We don't have a booth or developer room but you can probably find us by looking for Arch stickers or hoodies :-)

2017 Repository cleanup

The repositories will be cleaned of orphan packages, which will be moved to the AUR, where they can be picked up and taken care of.

AUR 4.6.0 Release

A new version of aurweb has been released on December third. It brings markdown support for comments and more Trusted User specific changes.

Happy 2018!

I wish everyone a happy 2018 and keep on rolling :)

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

Arch monthly November

December 08, 2017 11:11 AM

New TU Andrew Crerar

Andrew Crerar applied to become a Trusted User and was accepted! Congratulations! His intention is to move firefox-develop from the AUR to [community].

77% Reproducible packages

Currently 77% of the packages are reproducible. Note that we do not yet vary everything between the two builds; for example the filesystem, build path and other options could still be varied.

Pro-audio mailing list

For audio enthusiasts there is a new mailing list to discuss audio packaging, development, usage, etc.

GCC and GCC-multilib merged

Now that 32-bit support is dropped, the normal GCC package has gained support for building multilib packages, simplifying packaging.

Mime-types replaced with mailcap

Mime-types is now replaced by mailcap in this change.

Arch Linux at 34C3

A few Arch Linux Developers and Trusted Users will be at 34C3 in Leipzig; if you are there, come meet us! A certain Arch user was recruited after talks at congress!

Analysis of AUR and Official Arch Repository data

Brian Caffey has made an analysis of the AUR and the Arch repositories.

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

Writing text in Unity

December 04, 2017 06:40 AM


Writing text in Unity isn't that easy, at least if you want to generate text from single GameObjects to be displayed in 3D and not only as flat UI text. Every single letter must be dragged and dropped into place to form a word.

Personally I was frustrated and didn't find a solution on the internet, so I decided to write this small script on my own, which generates text in the editor while you are typing, as you can see in the animation. For this example I used the letters from the Unity Asset Store package "Simple Icons - Cartoon Assets" (https://www.assetstore.unity3d.com/en/#!/content/59925). Maybe you will find this helpful or have ideas to improve it; if so, please let me know.😉

A detailed description of how to use this small script can be found in my repository on GitHub (https://github.com/isenmann/UnityTextGenerator).
ise (noreply@blogger.com)@Daniel Isenmann

Start VR development with the right toolkit

November 29, 2017 12:28 AM


More or less one year ago I started developing in VR and tried several plugins and tools in Unity to implement all the basic stuff like teleporting, grabbing, triggering and so on.

The first thing you will probably try or see in Unity is the official SteamVR plugin from Valve, if you have a Vive like me. Basically this SDK has everything you will need to realize your project, but it's a really hard start if you try several things with the SteamVR SDK directly. There is not much documentation, and few example scenes you can have a look at. For me it was more frustrating than helpful, so searching the Unity Asset Store and the internet you will maybe find the VRTK (Virtual Reality Toolkit) from Harvey Ball (aka TheStoneFox).

It's the best toolkit you can get if you are trying to do something in Unity for VR. You get tons of documentation for nearly all use cases you can imagine. Furthermore he has created lots of examples where you find nearly all that stuff sorted and split into different scenes to show you how to use them. Every single script he has written is licensed under the MIT license and can be studied directly in the SDK or in his GitHub repository for the toolkit.

A very active community is discussing stuff in its own Slack channel. Even a YouTube channel exists where he posts tutorials or does live Q&A sessions to answer your questions.

But why do I tell you this? Because Harvey Ball has done an absolutely astonishing job on this toolkit. You don't have to fiddle around with each single SDK for each vendor, you don't have to think about grabbing an object, teleporting around, using an object, realizing a button, realizing a usable door or anything you can think of. You can just use the VRTK with nearly all available VR headsets out there directly and start to code your idea or project right away.

And the best thing: he decided right from the beginning to give all this away for free! This is even more astonishing if you see all the effort which is behind such a toolkit. These are all reasons to give something back to Harvey. How? You can decide to become a patron on his Patreon page, start contributing directly to the toolkit if you are a coder, or contact Harvey and ask how you can help. Right now he is really looking for some donations to go on with the development of VRTK. It would be a shame if VRTK died just because of a lack of support. So give and show him a little bit of love and help for this really useful and totally necessary toolkit for VR development in Unity!

Look at all the links I have posted here and decide on your own how important this project is.
ise (noreply@blogger.com)@Daniel Isenmann

Reproducible Arch Linux?!

November 26, 2017 12:37 PM

The reproducible build initiative was started a long time ago by Debian and has grown to include more projects. Arch is now also in the process of getting reproducible build support, thanks to the hard work of Anthraxx, Sangy, and many more volunteers. Patches to support reproducible builds have landed in pacman git and will be included in the next stable release, hopefully soon! Meanwhile, with the help of the reproducible-builds.org rebuild infrastructure, rebuilds have been started!

Currently 77% of the tested packages (roughly 17% of the repository so far) are reproducible, as can be seen here. This page is fed by the work done by two Jenkins builders, which currently build the whole Arch repository.

The builder builds each package twice in different environments and then uses diffoscope to find differences between the packages. Usually the differences are due to timestamps :-). Now that we have some results of the rebuilds, we can start fixing our packages. The work I did so far:

  • Fixing 404 sources of our packages; some of the source failures were due to ftp://kernel.org being used instead of https://www.kernel.org.

    This has been fixed in SVN. Old PyPI links also needed to be fixed.

  • One package's .install file contained a killall statement; I'm not sure why, but it shouldn't be required, so it was eradicated.

  • An integrity mismatch, because upstream did a ninja re-release; annoying but fixed.

  • Imagemagick's convert sets some metadata in the resized PNGs, which makes reproducible builds fail since it does not adhere to SOURCE_DATE_EPOCH.

  • A missing checkdepends on pytest-runner, which is automatically downloaded by the build tools but failed in the reproducible build. Simply adding the dependency to checkdepends fixed it.

As you can see, only one of the bullet points was really a reproducible build issue; the others were packaging issues. So I can conclude that reproducible builds will increase the packaging quality in the Arch repository. Keeping the packages in our repository always buildable will also help the Arch Linux 32 project.

The Arch reproducible project still needs a lot of work to make it possible for a user to verify a package build against the repository package.

P.S.: If you are at 34C3 this year and interested, visit the reproducible build assembly.

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

Using the WRLD Unity SDK with a stencil mask object

November 24, 2017 09:32 AM



Maybe you have heard about the great WRLD project, which provides a great way to display real world map data in your project. Furthermore they provide several different SDKs to access this data.

For a small project I needed some map data to visualize in a 3D scene inside Unity. Using the Unity SDK from WRLD you can easily access that data and it will be displayed in your scene. Sadly they render the map all over your scene and there is no restriction in size. At least I haven't found any; even if you use their script attached to a GameObject with a specific size, the map will be displayed all over the scene.

After some failures and searching the web I stumbled upon a video showing the usage of WRLD in an AR environment. There they do exactly what I needed. Luckily the video was made by WRLD and they also provided two very good blog posts where they explained how they have done it. With the help of these blog posts I implemented it without all the AR stuff and came up with the proof of concept you can see in the animation.

The displayed cube is used as a stencil mask for the map and if you move the cube or the map, only the part of the map which is inside the cube will be rendered. Also new tiles of the map are loaded dynamically depending on the main camera in the scene. I have published the Unity project on GitHub to provide the solution ready to use for your project and also to archive it for myself. You will need a valid API key from WRLD, just register at their website and generate one for your needs. Then insert your API key at the WRLD Map GameObject:


The project includes the WRLD Unity SDK, which you can also find in the Asset Store of Unity. But be careful if you replace the included one with the official one from the store, because I have made some changes they mentioned in their blog posts. So make sure to apply the code changes if you replace the integrated WRLD SDK.

Hope you will find it useful. If you find a bug or have useful hints then let me know, because I'm quite new to Unity and thankful for anything related to it.
ise (noreply@blogger.com)@Daniel Isenmann

Arch monthly October

November 11, 2017 10:11 AM

This is the second edition of Arch monthly, mostly due to the lack of time to work on Arch weekly. So let's start with the roundup of last month.

New TU David Runge

David Runge applied to become a Trusted User and was accepted! He mentioned having a huge interest in pro-audio, so hopefully there will be improvements made in that area!

Farewell 32 bit

After a nine-month deprecation period, 32-bit is now unsupported on Arch Linux. For people with 32-bit hardware there is the Arch Linux 32 project, which intends to keep 32-bit support going.

AUR Changes Affecting Your Privacy

The next aurweb release, which will be released on 2017-12-03, includes a public interface to obtain a list of user names of all registered users. This means that, starting on 2017-12-03, your user name will be visible to the general public. The user name is the account name you specified when registering, and it is the only information included in this list. See this link for more information.

#archlinux-testing irc channel

An irc channel has been created for coordination between Arch Linux testers. See more about becoming an official tester here.

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

The end of i686 support

November 08, 2017 01:39 PM

Following 9 months of deprecation period, support for the i686 architecture effectively ends today. By the end of November, i686 packages will be removed from our mirrors and later from the packages archive. The [multilib] repository is not affected.

For users unable to upgrade their hardware to x86_64, an alternative is a community maintained fork named Arch Linux 32. See their website for details on migrating existing installations.

Bartłomiej Piotrowski@Official News

Testing your salt states with kitchen-salt

October 04, 2017 04:17 PM

What is Kitchen and why would someone use it.

test-kitchen was originally written as a way to test chef cookbooks, but the provisioners and drivers are pluggable; kitchen-salt enables salt to be the provisioner instead of chef.

The goal of kitchen-salt is to make it easy to test salt states or formulas independently of a production environment. It allows for doing quick checks of states and making sure that upstream changes in packages will not affect deployments. By using platforms, users can run checks on their states against the environment they are running in production, as well as check future releases of distributions before doing major upgrades. It is also possible to test states against multiple versions of salt to make sure there are no major regressions.

Example formula

This article will be using my wordpress-formula to demo the major usage points of kitchen-salt.

Installing Kitchen

Most distributions provide a bundler gem in the repositories, but some have a version of ruby that is too old to use kitchen. The easiest way to use kitchen on each system is to use a ruby version manager like rvm or rbenv. rbenv is very similar to pyenv.
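
A hedged sketch of that route (it assumes rbenv with the ruby-build plugin is installed; the version number is illustrative):

# install a recent ruby, pin it for this checkout, then install bundler
rbenv install 2.4.2
rbenv local 2.4.2
gem install bundler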

Once ruby bundler is installed, it can be used to install localized versions of the ruby packages for each repository, using the bundle install command.

$ bundle install
The latest bundler is 1.16.0.pre.2, but you are currently running 1.15.4.
To update, run `gem install bundler --pre`
Using artifactory 2.8.2
Using bundler 1.15.4
Using mixlib-shellout 2.3.2
Using mixlib-versioning 1.2.2
Using thor 0.19.1
Using net-ssh 4.2.0
Using safe_yaml 1.0.4
Using mixlib-install 2.1.12
Using net-scp 1.2.1
Using net-ssh-gateway 1.3.0
Using test-kitchen 1.17.0
Using kitchen-docker 2.6.1.pre from https://github.com/test-kitchen/kitchen-docker.git (at master@9eabd01)
Using kitchen-salt 0.0.29
Bundle complete! 3 Gemfile dependencies, 13 gems now installed.
Use `bundle info [gemname]` to see where a bundled gem is installed.

This will require having a separate Gemfile to hold the requirements for running test-kitchen.

source "https://rubygems.org"

gem "test-kitchen"
gem "kitchen-salt"
gem 'kitchen-docker', :git => 'https://github.com/test-kitchen/kitchen-docker.git'

Because I am also testing opensuse, right now the git version of kitchen-docker is required.

Using kitchen

$ bundle exec kitchen help
Commands:
  kitchen console                                 # Kitchen Console!
  kitchen converge [INSTANCE|REGEXP|all]          # Change instance state to converge. Use a provisioner to configure one or more instances
  kitchen create [INSTANCE|REGEXP|all]            # Change instance state to create. Start one or more instances
  kitchen destroy [INSTANCE|REGEXP|all]           # Change instance state to destroy. Delete all information for one or more instances
  kitchen diagnose [INSTANCE|REGEXP|all]          # Show computed diagnostic configuration
  kitchen driver                                  # Driver subcommands
  kitchen driver create [NAME]                    # Create a new Kitchen Driver gem project
  kitchen driver discover                         # Discover Test Kitchen drivers published on RubyGems
  kitchen driver help [COMMAND]                   # Describe subcommands or one specific subcommand
  kitchen exec INSTANCE|REGEXP -c REMOTE_COMMAND  # Execute command on one or more instance
  kitchen help [COMMAND]                          # Describe available commands or one specific command
  kitchen init                                    # Adds some configuration to your cookbook so Kitchen can rock
  kitchen list [INSTANCE|REGEXP|all]              # Lists one or more instances
  kitchen login INSTANCE|REGEXP                   # Log in to one instance
  kitchen package INSTANCE|REGEXP                 # package an instance
  kitchen setup [INSTANCE|REGEXP|all]             # Change instance state to setup. Prepare to run automated tests. Install busser and related gems on one or more instances
  kitchen test [INSTANCE|REGEXP|all]              # Test (destroy, create, converge, setup, verify and destroy) one or more instances
  kitchen verify [INSTANCE|REGEXP|all]            # Change instance state to verify. Run automated tests on one or more instances
  kitchen version                                 # Print Kitchen's version information

The kitchen commands I use the most are:

  • list: show the current state of each configured environment
  • create: create the test environment with ssh or winrm
  • converge: run the provision command, in this case salt_solo and the specified states
  • verify: run the verifier
  • login: log in to the created environment
  • destroy: remove the created environment
  • test: run create, converge, verify, and then destroy if it all succeeds

For triaging github issues, I regularly use bundle exec kitchen create <setup> and then the salt bootstrap script to install the salt version we are testing.

Then, for running tests, I run bundle exec kitchen converge <setup> to set up the environment I want to run the tests in.

Configuring test-kitchen

There are 6 major parts to the test-kitchen configuration file. This file is .kitchen.yml and should be in the directory from which the kitchen command is going to be run.

  • driver: This specifies the driver configuration. Drivers are how the virtual machine is created; see kitchen drivers (I prefer docker).
  • verifier: The command to run for tests to check that the converge ran successfully.
  • platforms: The different platforms/distributions to run on
  • transport: The transport layer to use to talk to the vm. This defaults to ssh, but winrm is also available.
  • suites: sets of different test runs.
  • provisioner: The plugin for provisioning the vm for the verifier to run against. This is where kitchen-salt comes in.

For the driver on the wordpress-fomula, the following is set:

driver:
  name: docker
  use_sudo: false
  privileged: true
  forward:
    - 80

This is using the kitchen-docker driver. If the user running kitchen does not have the correct privileges to run docker, then use_sudo: true should be set. All of the containers that are being used here use systemd as the exec command, so privileged: true needs to be set. And then port 80 is forwarded to the host so that the verifier can run commands against it to check that wordpress has been set up.

For the platforms, the following are set up to run systemd when the container starts.

platforms:
  - name: centos
    driver_config:
      run_command: /usr/lib/systemd/systemd
  - name: opensuse
    driver_config:
      run_command: /usr/lib/systemd/systemd
      provision_command:
        - systemctl enable sshd.service
  - name: ubuntu
    driver_config:
      run_command: /lib/systemd/systemd
  - name: debian
    driver_config:
      run_command: /lib/systemd/systemd

All of these distributions except for opensuse have sshd.service enabled when the package is installed, so we only have to have one provision command to enable sshd for opensuse. The rest have a command to configure the driver run_command to the correct systemd binary for that distribution.

For suites, there is only one suite.

suites:
  - name: wordpress

If multiple sets of pillars or different versions of salt needed to be tested, they would be configured here.

suites:
  - name: nitrogen
  - name: develop
    provisioner:
      salt_bootstrap_options: -X -p git -p curl -p sudo git develop

And there would be multiple suites, each created and tested for every platform.

And lastly for the verifier.

verifier:
  name: shell
  remote_exec: false
  command: pytest -v tests/integration/

There are a couple of base verifiers. I usually use the shell verifier with testinfra, which has multiple connectors to run pytest-style test functions inside of the container.

Kitchen also sets a $KITCHEN_SUITE variable, so different test files can be run for each suite.

verifier:
  name: shell
  remote_exec: false
  command: pytest -v tests/integration/$KITCHEN_SUITE

For salt-jenkins, since we are setting up the containers to run the SaltStack testing suite, the verifier is set up to run inside of the container and run the salt testing suite.

verifier:
  name: shell
  remote_exec: true
  command: '$(kitchen) /testing/tests/runtests.py -v --output-columns=80 --run-destructive<%= ENV["TEST"] ? " -n #{ENV["TEST"]}" : "" %>'

remote_exec will cause the command to be run inside of the container. The kitchen command uses the installed salt to look up whether py3 was used or not, so that the correct python executable is used to run the test suite. Then, if the TEST environment variable is set, only that test is run; otherwise the full test suite is run.
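
A hedged usage sketch of that last point (the instance regexp and test name are hypothetical):

# run only one test from the suite inside an already-converged instance
TEST=integration.modules.test_pkg bundle exec kitchen verify centos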

Configuring kitchen-salt

The documentation for kitchen-salt is located here

provisioner:
  name: salt_solo
  salt_install: bootstrap
  salt_version: latest
  salt_bootstrap_url: https://bootstrap.saltstack.com
  salt_bootstrap_options: -X -p git -p curl -p sudo
  is_file_root: true
  require_chef: false
  salt_copy_filter:
    - .circleci/
    - Dockerfile
    - .drone.yml
    - .git/
    - .gitignore
    - .kitchen/
    - .kitchen.yml
    - Gemfile
    - Gemfile.lock
    - requirements.txt
    - tests/
    - .travis.yml
  dependencies:
    - name: apache
      repo: git
      source: https://github.com/saltstack-formulas/apache-formula.git
    - name: mysql
      repo: git
      source: https://github.com/saltstack-formulas/mysql-formula.git
    - name: php
      repo: git
      source: https://github.com/saltstack-formulas/php-formula.git
  state_top:
    base:
      "*":
        - wordpress
  pillars:
    top.sls:
      base:
        "*":
          - wordpress
    wordpress.sls:
      mysql:
        database:
          - wordpress
        user:
          wordpress:
            password: quair9aiqueeShae4toh
            host: localhost
            databases:
              - database: wordpress
                grants:
                  - all privileges
      wordpress:
        lookup:
          admin_user: gtmanfred
          admin_email: daniel@gtmanfred.com
          title: "GtManfred's Blog"
          url: http://blog.manfred.io
  • name: The name of the provisioner is salt_solo
  • salt_install: This defaults to bootstrap which installs using the salt bootstrap. Other options are apt and yum which use the repo.saltstack.com repository. ppa allows for specifying a ppa from which to install salt. And distrib which just uses whatever version of salt is provided by the distribution repositories.
  • salt_bootstrap_options: These are the bootstrap options that are passed to the bootstrap script. -X can be passed here to not start the salt services, because salt_solo runs salt-call and doesn't use the salt-minion process.
  • is_file_root: This is used to say just copy everything from the current directory to the tmp fileserver in the kitchen container. If there were not a custom module and state for this formula, kitchen could be set to have formula: wordpress to copy the wordpress-formula to the kitchen environment.
  • salt_copy_filter: This is a list of files to not copy to the kitchen environment.
  • dependencies: This is the fun part. If the formula depends on other formulas, they can be configured here. The following types are supported:
    • path - use a local path
    • git - clone a git repository
    • apt - install an apt package
    • yum - install a yum package
    • spm - install a spm package
  • state_top: This is the top file that will be used to run at the end of the provisioner
  • pillars: This is a set of custom pillars for configuring the instance. There are a couple other ways to provide pillars that are also useful.

Running test kitchen on pull requests.

Any of the major testing platforms should be usable. If complicated setups are needed, Jenkins is probably the best; unfortunately I do not know jenkins very well, so I have provided examples for the three I know how to use.

My personal favorite is Drone. You can set up each one of the test suites to run with a mysql container if you do not have states that need mysql-server installed on the instance. Also, for each job runner for Drone, you just need to set up another drone-agent on a server running docker and hook it into the drone-server; then each drone-agent can pick up a job and run it.

Daniel Wallace@Daniel Wallace

Arch monthly September

October 02, 2017 08:00 PM

This is the first edition of Arch monthly, mostly due to the lack of time to work on Arch weekly. So let's start with the roundup of last month.

Two new Trusted Users

Alad and Foxboron joined the Trusted Users team! Congrats!

Archweb signoff helper

This has been around for a while, but Foxboron created this great tool to sign off packages in [testing] simply from the CLI. If you are an official tester, try it out!

Arch Classroom - Python for beginners

Pulec organizes a classroom about Python for beginners on Wednesday, October 04, 2017 at 16:00 UTC in the channel #archlinux-classroom on the freenode network. See this post for more details.

Eli Schwartz is our new bugwrangler

Eli Schwartz joins as bugwrangler, helping out with assigning and investigating new bugs.

Arch-meson wrapper

If you package software which uses meson as a build tool, then the arch-meson wrapper is useful, since it sets the default options for Arch.
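
A hedged sketch of how that might look in a PKGBUILD (the directory names are illustrative):

build() {
  # arch-meson wraps the meson setup step with Arch's default options
  arch-meson "$pkgname-$pkgver" build
  ninja -C build
}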

Arch manpages website

A new website popped up which hosts Arch manual pages.

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

Arch Linux Vagrant boxes

September 08, 2017 05:36 PM

Hello everybody,
I am pleased to announce that we provide official Arch Linux Vagrant boxes now:

https://app.vagrantup.com/archlinux/box … 2017.09.07

URL to the project: https://github.com/archlinux/arch-boxes

Shibumi@Forum Announcements

Perl library path change

September 02, 2017 11:44 AM

The perl package now uses a versioned path for compiled modules. This means that modules built for a non-matching perl version will not be loaded any more and must be rebuilt.

A pacman hook warns about affected modules during the upgrade by showing output like this:

WARNING: '/usr/lib/perl5/vendor_perl' contains data from at least 143 packages which will NOT be used by the installed perl interpreter.
 -> Run the following command to get a list of affected packages: pacman -Qqo '/usr/lib/perl5/vendor_perl'

You must rebuild all affected packages against the new perl package before you can use them again. The change also affects modules installed directly via CPAN. Rebuilding will also be necessary again with future major perl updates like 5.28 and 5.30.

Please note that rebuilding was already required for major updates prior to this change; however, now perl will no longer try to load the modules and then fail in strange ways.

If the build system of some software does not detect the change automatically, you can use perl -V:vendorarch in your PKGBUILD to query perl for the correct path. There is also sitearch for software that is not packaged with pacman.
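
A hedged sketch of querying that path from a PKGBUILD instead of hardcoding it:

# perl -V:vendorarch prints vendorarch='...'; eval puts it in a shell variable
eval "$(perl -V:vendorarch)"
install -d "$pkgdir$vendorarch"    # the versioned vendor module directory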

Florian Pritz@Official News

Responsive views for Forums and Wiki

August 30, 2017 05:34 AM

Hi,

for the last few days I have been working on a more mobile friendly
view of our Wikis and Forums. These have just been deployed. Here is
what changed:

While it's not perfect on small screens it should at least be way more
readable on your mobile phones. Let me know of any issues though.

Wiki:
* Updated to MediaWiki 1.29.1
* Removed our fork of the MonoBook skin
* Introduce a new "ArchLinux" extension which injects some styles and
our navigation bar independent of the skin. This was quite a lot of
work to figure out, but future updates should be way easier now.
* The default skin is Vector; MonoBook is still available and can be
enabled in your personal settings
* The MobileFrontend extension has been removed (So we have a branded
view for mobile as well)
* PR see https://github.com/archlinux/archwiki/pull/9/files

Forums:
* Created a github repo at https://github.com/archlinux/archbbs
* PR at https://github.com/archlinux/archbbs/pull/1
* Some docker compose configuration to simplify development (similar
to the one in the wiki)

In addition to this I have been working on a re-implementation of
https://www.archlinux.de. Part of this is a new, more mobile friendly
design. Especially the navigation, which moves the menu entries into a
so-called hamburger menu on smaller screens, is still missing from the
implementation mentioned above.

I plan to extract these "somehow" so we can use a common navigation in
all our websites. At least a generated snippet we can copy into our
projects.

Greetings,

Pierre

Pierre@Forum Announcements

Till Dawn - first pre-alpha version available (VR zombie shooter)

August 23, 2017 07:13 AM


Finally we have released the first pre-alpha version of our new project Till Dawn. Till Dawn is a VR zombie survival shooter. It is tested with the HTC Vive; the Oculus Rift is also supported, but untested.

You can download the game on the game page at itch.io or at gamejolt, and you will also find all information about the game on these pages. At the moment it's free, but you can donate as much as you want to support us.

So grab a free copy, connect your HTC Vive, play it and let us know what you think about it. Feedback is always welcome, so if you have some ideas for the game then let us know. But remember it's under development right now, so make sure to read the description and the known issues.

  



Here are some screenshots of the game, but to get a better impression you have to play it.




It's only available for Windows right now, because SteamVR and Unity3D under Linux are a little bit more complicated. Sorry to all Linux gamers out there; as soon as it is supported without errors, we will release it for Linux, too.

If you like the game then share the information, tweet about it (don't forget to mention @devplayrepeat), make a blog post or just tell your friends. To follow the development, make sure to regularly check the itch.io page of the game, because we will post news there.

ise (noreply@blogger.com)@Daniel Isenmann

Arch weekly #2

May 26, 2017 09:00 AM

This is the second edition of Arch weekly, a small weekly post about the news in the Arch Linux community.

Official docker image for Arch Linux!

After reporting about the Arch-boxes project last week: Pierres created the Arch Linux organization on Docker Hub and created a base image. The docker build script can be found here. Now you can easily run Arch in docker with a (regularly updated) base image!

docker run -ti archlinux/base /bin/bash

pyalpm 0.8.1 release

A bugfix release for pyalpm has been made; it fixes one memory leak, removes some unused code and contains some build fixes.

Archweb upgrade

Archweb has been upgraded to Django 1.8 LTS; previously it was running on 1.7, which is no longer supported. If you encounter any issues on https://archlinux.org please report them on the bug tracker.

MariaDB upgrade important news

There are plans to update MariaDB to 10.2.6. This will change the library soname from libmysqlclient.so to libmariadb.so and bring some dependency changes; more details are in the link.

New Trusted User foxxx0

Thore Bödecker joins the TU team; you can read his application here.

Discussion about improving the overall experience of contributors

Bartłomiej has started a discussion on arch-dev-public about improving and getting more external contributors involved in Arch Linux. Not only could existing Arch projects such as pyalpm, archweb and namcap use more contributors for developing new features and fixing bugs; Arch could also use more contributors for new projects and ideas such as rebuild automation and the maintenance of our infrastructure. For those wondering what the infrastructure is about: Arch has a few dedicated servers for the forums, building packages, etc., and all these servers are managed with ansible, with the playbooks in git.

Security updates of the week

The following packages received security updates:

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

Arch weekly #1

May 17, 2017 06:00 PM

This is the first edition of Arch weekly, a small weekly post about the news in the Arch Linux community. Hopefully this will be a recurring weekly blog post!

linux-hardened appears in [community]

After the disappearance of linux-grsec from the repos, due to the Grsecurity project no longer providing the required patches, Daniel Micay provides an alternative: linux-hardened in [community]. The package is based on the following Linux fork, which contains more security patches than the Linux mainline kernel and enables more security configuration options by default, such as SLAB_FREELIST_RANDOM.

More information can be found on the wiki of the project.

Arch-boxes project

An effort has been made by Shibumi to provide official Arch Linux docker, vagrant (and maybe ec2) images. Currently there is a virtualbox and qemu/libvirt option. View the project here.

Qt 4 now depends on OpenSSL 1.1

Even after the enormous OpenSSL 1.1 rebuild, not every package in the repository uses OpenSSL 1.1 yet. Qt 4, currently in [extra], now uses OpenSSL 1.1, with 27 packages left in the repository which depend on openssl-1.0. Other packages depending on OpenSSL 1.0 are now being rebuilt to stay compatible with Debian Stable and non-free software. See this bug report for more information.

Boost 1.64 rebuild

A rebuild is currently underway and will land in [testing] soon (tm).

[pacman-dev] Repository management discussion

Allan started a discussion on improving the current repository management tooling in pacman. Feedback and patches are welcome :)

GCC 7.1 hits [testing]

GCC 7.1 has landed in [testing]; please test it and report issues!

Security updates of the week

There are quite a lot of security advisories; you can view them here.

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

Deprecation of ABS tool and rsync endpoint

May 15, 2017 10:55 AM

Due to high maintenance cost of scripts related to the Arch Build System, we have decided to deprecate the abs tool and thus rsync as a way of obtaining PKGBUILDs.

The asp tool, available in [extra], provides similar functionality to abs. asp export pkgname can be used as a direct alternative; more information about its usage can be found in the documentation. Additionally, Subversion sparse checkouts, as described here, can be used to achieve a similar effect. For fetching all PKGBUILDs, the best way is cloning the svntogit mirrors.
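
A hedged example of the alternatives mentioned above (the package name and mirror URL are illustrative):

# fetch a single package's build files with asp
asp export pacman
# or clone one of the svntogit mirrors to get all PKGBUILDs
git clone https://git.archlinux.org/svntogit/packages.git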

While the extra/abs package has already been dropped, the rsync endpoint (rsync://rsync.archlinux.org/abs) will be disabled by the end of the month.

Bartłomiej Piotrowski@Official News

How my car insurance exposed my position

May 11, 2017 12:00 AM

As many car insurance companies do, my car insurance company provides a satellite device that can be put inside your car to provide its location at any time and in any place.

By installing such a device in your car, the car insurance company profiles your conduct, of course, but it could also help the police find your car if it gets stolen, and you will probably get a nice discount on the insurance price (even up to 40%!). Long story short: I got one.

Often such companies also provide an “App” for smartphones to easily track your car when you are away or to monitor your partner…mine (the company!) does.

Then I downloaded my company’s application for Android, but unluckily it needs the Google Play Services to run. I am a FLOSS evangelist and, as such, I try to use FLOSS apps only and without gapps.

Luckily I’m also a developer and, as such, I try to develop the applications I need most; using mitmproxy, I started to analyze the APIs used by the App to write my own client.

Authentication

As soon as the App starts you need to authenticate yourself to enable the buttons that allow you to track your car. Fair enough.

The authentication form first asks for your taxpayer's code; I put in mine and under the hood it performs the following request:

curl -X POST -d 'BLUCS§<taxpayers_code>§-1' http://<domain>/BICServices/BICService.svc/restpostcheckpicf<company>

The Web service replies with a cell phone number (WTF?):

2§<international_calling_code>§<cell_phone_number>§-1

Wait. What do we already see here? Yes, besides the ugliest formatting ever and the fact that the request uses plain HTTP, it takes only 3 arguments to get a cell phone number? And guess what? The first and the last are two constants. In fact, if we put in an inexistent taxpayer's code, keeping the same values, we get:

-1§<international_calling_code>§§-100%

…otherwise we get a cell phone number for the given taxpayer’s code!

I hit my head and I continued the authentication flow.

After that, the App asks me to confirm that the cell phone number it got is still valid, but it also wants the password I got via mail when subscribing to the car insurance; OK, let's proceed:

curl -X POST -d 'BLUCS§<taxpayers_code>§<device_imei>§<android_id>§<device_brand>-<device_model>_unknown-<api_platform>-<os_version>-<device_code>§<cell_phone_number>§2§<password>§§-1' http://<domain>/BICServices/BICService.svc/restpostsmartphoneactivation<company>

The Web service responds with:

0§<some_code>§<my_full_name>

The some_code parameter changes every time, so it seems to work as a "registration id", but after this step the App unlocked the button to track my car.

I was already astonished at this point: how will the authentication work? Does it need this some_code in combination with my password at each request? Or maybe it will ask for my taxpayer's code?

Car tracking

I started implementing the car tracking feature, which allows retrieving the last 20 positions of your car, so let's analyze the request made by the App:

curl -X POST -d 'ASS_NEW§<car_license>§2§-1' http://<domain>/BICServices/BICService.svc/restpostlastnpositions<company>

The Web service responds with:

0§20§<another_code>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>§DD/MM/YYYY HH:mm:SS#<latitude>#<longitude>#0#1#1#1-<country>-<state>-<city>-<street>

WTH?!? No header?!? No cookie?!? No authentication parameters?!?

Yes, your assumption is right: you just need a car license and you get its last 20 positions. And what's that another_code? I just wrote it down for the moment.

It couldn't be real, I first thought (hoped): maybe they stored my IP somewhere so I'm authorized to get this data now. So let's try from a VPN…oh damn, it worked.

Then I tried with an inexistent car license and I got:

-2§TARGA NON ASSOCIATA%

which means: “that car license is not in our database”.

So what could we get here with the help of crunch? Easy enough: a list of car licenses that are covered by this company, and the last 20 positions for each one.

I couldn’t stop now.

The Web client

This car insurance company also provides a Web client which permits more operations, so I logged in to analyze its requests. While it's hosted on a different domain and it uses a cookie for almost every request, it performs one single request to the domain I previously used, which isn't authenticated and got my attention:

curl http://<domain>/<company>/(S(<uuid>))/NewRemoteAuthentication.aspx?RUOLO=CL&ID=<another_code>&TARGA=<car_license>&CONTRATTO=<foo>&VOUCHER=<bar>

This one replies with an HTML page that is shown in the Web client:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
<HTML>
<HEAD>
    <title>NewRemoteAuthentication</title>
    <meta name="GENERATOR" Content="Microsoft Visual Studio .NET 7.1" />
    <meta name="CODE_LANGUAGE" Content="C#" />
    <meta name="vs_defaultClientScript" content="JavaScript"/>
    <meta name="vs_targetSchema" content="http://schemas.microsoft.com/intellisense/ie7" />
        <!--<meta content="IE=EmulateIE10" name="ie_compatibility" http-equiv="X-UA-Compatible" />-->
        <meta name="ie_compatibility" http-equiv="X-UA-Compatible" content="IE=7, IE=8, IE=EmulateIE9, IE=10, IE=11" />
</HEAD>
    <body>
    <form name="Form1" method="post" action="/<company>/(S(<uuid>))/NewRemoteAuthentication.aspx?RUOLO=CL&amp;ID=<another_code>&amp;TARGA=<car_license>" id="Form1">
<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPDwULLTIwNzEwODIsJFNAgEPKAJDIeBsdSpc2libGVnZGRic5McHC9+DqRx0H+jRt5O+/PLtw==" />

            <iframe id="frm1" src="NewRicerca.aspx" width="100%" height="100%"></iframe>


<SCRIPT language="JavaScript">
<!--
self.close
// -->
</SCRIPT>
</form>
</body>
</HTML>

It includes an iframe (sigh!), but that’s the interesting part!!! Look:

Car history

From that page you get:

  • the full name of the person that has subscribed the insurance;
  • the car model and brand;
  • the total amount of kilometers made by the car;
  • the total amount of travels (meant as “car is moving”) made by the car;
  • access to months travels details (how many travels);
  • access to day travels details (latitude, longitude, date and time);
  • access to months statistics (how often you use your car).

Car month history Car day history Car month_statistics

There is a lot of information here and these statistics are available since the installation of the satellite device.

The request isn't authenticated, so I just have to understand the parameters to fill in. Often not all parameters are required, so I tried removing some to find out which are really needed. It turns out that I can simplify it as:

curl http://<domain>/<company>/(S(<uuid>))/NewRemoteAuthentication.aspx?RUOLO=CL&ID=<another_code>&TARGA=<car_license>

But there's still an another_code there…mmm, wait, it looks like the number I took down previously! And yes, it is!

So, http://<domain>/<company>/(S(<uuid>))/NewRicerca.aspx is the page that really shows all the information, but how do I generate that uuid thing?

I tried removing it first, and then I got an empty page. Sure, makes sense: how would that page ever know which data I'm looking for?

Then it must be the NewRemoteAuthentication.aspx page that does something; I tried again by removing the uuid from that URL and, to my full surprise, it redirected me to the same URL, but it also filled in the uuid part as a path parameter! Now I can finally invoke NewRicerca.aspx using that uuid and read all the data!

Conclusion

You just need a car license which is covered by this company to get all the travels made by that car, the full name of the person owning it and its position in real time.

I reported this privacy flaw to the CERT Nazionale which wrote to the company.

The company fixed the leak 3 weeks later by providing new Web service endpoints that use authenticated calls. The company mailed its users telling them to update their App as soon as possible. The old Web services were shut down a month and a half after my first contact with the CERT Nazionale.

I could be wrong, but I suspect the privacy flaw had been around for 3 years, because the first Android version of the App uses the same APIs.

I got no bounty.

The company is a leading provider of telematics solutions.

Andrea Scarpino

First x86_64 TalkingArch

April 07, 2017 05:41 PM

The TalkingArch team is pleased to present the latest version of TalkingArch, available from the usual location. This version features all the latest software, including Linux kernel 4.10.6.

The most important feature of this live image is the new x86_64-only compatibility, removing the i686 compatibility that was present in previous images. This makes the latest image much smaller, but it will no longer work on older i686 machines.

This version is the only one that will be listed on the download page, as it always only includes the latest version. However, anyone needing an image that works on i686 may still download the last dual architecture image either via http or BitTorrent, until i686 is completely dropped from the Arch official repositories later this year. The TalkingArch team is also following the latest information on an i686 secondary port, and if there is enough of a need, and if the build process is as straightforward as the x86_64 version currently available, i686 images may be provided once the port is complete and fully working.

kyle@TalkingArch

ca-certificates-utils 20170307-1 upgrade requires manual intervention

March 15, 2017 09:27 PM

The upgrade to ca-certificates-utils 20170307-1 requires manual intervention because a symlink which used to be generated post-install has been moved into the package proper.

As deleting the symlink may leave you unable to download packages, perform this upgrade in three steps:

# pacman -Syuw                           # download packages
# rm /etc/ssl/certs/ca-certificates.crt  # remove conflicting file
# pacman -Su                             # perform upgrade
Jan Steffens@Official News

mesa with libglvnd support is now in testing

February 27, 2017 08:15 PM

mesa-17.0.0-3 can now be installed side-by-side with the nvidia-378.13 driver without any libgl/libglx hacks, with the help of Fedora and upstream xorg-server patches.

  • The first step was to remove the libglx symlinks with xorg-server-1.19.1-3 and the associated mesa/nvidia drivers, through the removal of various libgl packages. It was a tough moment because it broke optimus systems; the xorg-server configuration needed manual updating.

  • The second step is now here, with an updated 10-nvidia-drm-outputclass.conf file that should help to provide an "out-of-the-box" working xorg-server experience on optimus systems.

Please test this extensively and post your feedback in this forum thread or in our bugtracker.

Laurent Carlier@Official News

Using salt to build docker containers

February 17, 2017 01:09 AM

How docker works now.

When you build a docker container using only docker tools, what you are actually doing is building a bunch of layers. Great. Layers are a good idea: you get to build a bunch of docker images that share a lot of similar layers, so you only have to rebuild the changes when you update containers. But what you end up with is a really ugly, hard-to-read Dockerfile, because it tries to cram a bunch of commands onto the same line.

# from https://www.drupal.org/requirements/php#drupalversions
FROM php:7.0-apache

RUN a2enmod rewrite

# install the PHP extensions we need
RUN apt-get update && apt-get install -y libpng12-dev libjpeg-dev libpq-dev \
    && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr \
    && docker-php-ext-install gd mbstring opcache pdo pdo_mysql pdo_pgsql zip

# set recommended PHP.ini settings
# see https://secure.php.net/manual/en/opcache.installation.php
RUN { \
        echo 'opcache.memory_consumption=128'; \
        echo 'opcache.interned_strings_buffer=8'; \
        echo 'opcache.max_accelerated_files=4000'; \
        echo 'opcache.revalidate_freq=60'; \
        echo 'opcache.fast_shutdown=1'; \
        echo 'opcache.enable_cli=1'; \
    } > /usr/local/etc/php/conf.d/opcache-recommended.ini

WORKDIR /var/www/html

# https://www.drupal.org/node/3060/release
ENV DRUPAL_VERSION 8.2.6
ENV DRUPAL_MD5 57526a827771ea8a06db1792f1602a85

RUN curl -fSL "https://ftp.drupal.org/files/projects/drupal-${DRUPAL_VERSION}.tar.gz" -o drupal.tar.gz \
    && echo "${DRUPAL_MD5} *drupal.tar.gz" | md5sum -c - \
    && tar -xz --strip-components=1 -f drupal.tar.gz \
    && rm drupal.tar.gz \
    && chown -R www-data:www-data sites modules themes

The Dockerfile above is used to build a Drupal container. The RUN command in the middle is just really convoluted and I can't imagine trying to write a container like this. But what if you could use salt to configure your docker container to do the same thing?

Using Salt States

NOTE: This is all to be added in the Nitrogen release of salt, but you should be able to drop-in the dockerng state and module from develop once this PR is merged.

It is worth mentioning that this is a contrived example, because one of the requirements to use dockerng.call is to have python installed in the docker container. So for the salt example you will need to build a slightly modified parent container using the following command.

docker run --name temp php:7.0-apache bash -c 'apt-get update && apt-get install -y python' && docker commit temp php:7.0-apache-python && docker rm temp

Now, this shouldn't be a problem when building images; it just allows for managing the layers. If I were doing this for real, I would take the debian image, use salt states to set up apache and the base packages needed for building the extensions, and then run the following state.

Build Drupal Image:
  dockerng.image_present:
    - name: myapp/drupal
    - base: php:7.0-apache-python
    - sls: docker.drupal

Then this would build the image with my salt://docker/drupal.sls state.

{%- set exts = ('gd', 'mbstring', 'opcache', 'pdo', 'pdo_mysql', 'pdo_pgsql', 'zip') %}
{%- set DRUPAL_VERSION = '8.2.6' %}
{%- set DRUPAL_MD5 = '57526a827771ea8a06db1792f1602a85' %}

enable rewrite module:
  apache_module.enabled:
    - name: rewrite

install extensions:
  pkg.latest:
    - names:
      - libpng12-dev
      - libjpeg-dev
      - libpq-dev

  cmd.run:
    - names:
      - docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr:
        - prereq:
          - cmd: docker-php-ext-install gd
      {%- for ext in exts %}
      - docker-php-ext-install {{ext}}:
        - creates: /usr/local/etc/php/conf.d/{{ext}}.ini
      {%- endfor %}

configure opcache:
  file.managed:
    - name: /usr/local/etc/php/conf.d/opcache-recommended.ini
    - contents: |
        opcache.interned_strings_buffer=8
        opcache.max_accelerated_files=4000
        opcache.revalidate_freq=60
        opcache.fast_shutdown=1
        opcache.enable_cli=1

get drupal:
  archive.extracted:
    - name: /var/www/html
    - source: https://ftp.drupal.org/files/projects/drupal-{{DRUPAL_VERSION}}.tar.gz
    - source_hash: md5={{DRUPAL_MD5}}
    - user: www-data
    - group: www-data
    - enforce_toplevel: False
    - options: --strip-components=1

And we are done. In my honest opinion, this is significantly easier to read. First we enable the rewrite module. Then we install the packages needed for compiling the different php extensions. Then we use the built-in docker-php-ext-* scripts to build the different php modules, and we put the recommended opcache settings in place. Lastly we download and extract the drupal tarball into the correct place.

There is one caveat: right now we do not have the ability to bake in the WORKDIR and ENV variables, so those will have to be provided when the container is started.

Start Drupal Container:
  dockerng.running:
    - name: drupal
    - image: myapp/drupal:latest
    - working_dir: /var/www/html

I am going to look into adding those for the dockerng.create command that is used to create the starting container for the sls_build so that they can be saved for the image.

Daniel Wallace@Daniel Wallace

First public alpha release for 23 viewer

February 14, 2017 01:48 PM

Long time no post on my blog, but this time I have to. 

Finally I was able to release a first alpha version of my Android app "23 viewer". It's in an early alpha stage, but it can be used to browse the subscriptions of your contacts in the 23 photo community, to favorite a photo, to comment on photos and to see the EXIF data of a photo. 

You can find the apk package for Android on the github release page here: https://github.com/isenmann/23viewer/releases

To give you a short impression, here are some screenshots of the app:

[screenshots]
You will need an account on 23 to use the app; otherwise you won't be able to do or see anything in it. If you are using 23, give me some feedback via github or email. Thanks and have fun with the app.
ise (noreply@blogger.com)@Daniel Isenmann

Phasing out i686 support

January 25, 2017 06:23 PM

Due to the decreasing popularity of i686 among the developers and the community, we have decided to phase out the support of this architecture.

The decision means that the February ISO will be the last one that allows installing 32-bit Arch Linux. The next 9 months will be a deprecation period, during which i686 will still receive upgraded packages. Starting from November 2017, packaging and repository tools will no longer require i686 support from maintainers, effectively making i686 unsupported.

However, as there is still some interest in keeping i686 alive, we would like to encourage the community to make it happen with our guidance. The arch-ports mailing list and #archlinux-ports IRC channel on Freenode will be used for further coordination.

The [multilib] repository will not be affected by this change.

Bartłomiej Piotrowski@Official News

xorg-server 1.19.1 is now in extra

January 14, 2017 08:37 PM

The new version comes with the following changes:

  • xf86-input-libinput is the default input driver, however synaptics, evdev and wacom are still available.

  • These packages are deprecated and moved to AUR: xf86-input-joystick, xf86-input-acecad, xf86-video-apm, xf86-video-ark, xf86-video-chips, xf86-video-glint, xf86-video-i128, xf86-video-i740, xf86-video-mach64, xf86-video-neomagic, xf86-video-nv, xf86-video-r128, xf86-video-rendition, xf86-video-s3, xf86-video-s3virge, xf86-video-savage, xf86-video-siliconmotion, xf86-video-sis, xf86-video-tdfx, xf86-video-trident, xf86-video-tseng

Laurent Carlier@Official News

First TalkingArch of 2017, and other news

January 13, 2017 10:28 PM

The TalkingArch Iso

The TalkingArch team is proud to present the first TalkingArch iso of 2017. This one is the second to fix a problem with Portaudio that was causing Espeak to crash. It comes with the latest packages, including Linux kernel 4.8.13. Find it in the usual place.

Social Media

TalkingArch is now on Twitter. Follow us, tweet @ us, or even retweet release announcements, which will post there as well as here. All other ways of contacting TalkingArch still work; nothing is going away.

The Fall of i686

There has been significant talk over the past 2 to 3 weeks on the Arch Linux mailing list about dropping support for the i686 architecture. The upshot of the discussion is that by the end of this year, no further i686 packages will be uploaded to the official repositories. To that end, Arch will most likely be producing only x86_64 isos starting in February, ending the dual architecture builds. Because TalkingArch stays as close to Arch as possible, if Arch does stop producing dual architecture builds in February, TalkingArch will do the same. However, if an i686 version is needed, as long as packages are still being uploaded to the official repositories, an i686 build of TalkingArch can be made available upon request. That said, it will still be possible for some months to use this dual architecture build to install Arch Linux, until i686 packages are removed from the repositories, and that link will continue to work as long as it is usable. Thanks for the support over the years, and we look forward to serving you the latest and greatest TalkingArch for many years to come.

kyle@TalkingArch

Take It Or Leave It

January 01, 2017 09:34 AM

Happy new year!
Kyle Keen

OpenVPN 2.4.0 update requires administrative interaction

December 30, 2016 10:54 AM

The upgrade to OpenVPN 2.4.0 makes changes that are incompatible with previous configurations. Take special care if you depend on VPN connectivity for remote access! Administrative interaction is required:

  • Configuration is expected in sub directories now. Move your files from /etc/openvpn/ to /etc/openvpn/server/ or /etc/openvpn/client/.
  • The plugin lookup path changed, remove extra plugins/ from relative paths.
  • The systemd unit openvpn@.service was replaced with openvpn-client@.service and openvpn-server@.service. Restart and reenable accordingly.

This does not affect the functionality of networkmanager, connman or qopenvpn.
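
As a rough sketch of the migration (the config file and instance names below are placeholders, not part of the announcement):

# hypothetical client config previously at /etc/openvpn/myvpn.conf
mv /etc/openvpn/myvpn.conf /etc/openvpn/client/myvpn.conf

# switch from the old template unit to the new client unit
systemctl disable --now openvpn@myvpn.service
systemctl enable --now openvpn-client@myvpn.service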

Christian Hesse@Official News

security.archlinux.org is online!

December 21, 2016 04:30 PM

Hello everybody,
The Arch Linux Security Team built a new security tracker for Arch Linux. With this tracker you can browse ASAs (Arch Linux Security Advisories), AVGs (Arch Linux Vulnerability Groups) and CVEs in Arch packages.

https://security.archlinux.org

Shibumi@Forum Announcements

Practical fault detection: redux. Next-generation alerting now as presentation

December 10, 2016 07:13 PM

This summer I had the opportunity to present my practical fault detection concepts and hands-on approach as conference presentations.

First at Velocity and then at SRECon16 Europe. The latter page also contains the recorded video.

If you’re interested at all in tackling non-trivial timeseries alerting use cases (e.g. working with seasonal or trending data) this video should be useful to you.

It’s basically me trying to convey in a concrete way why I think the big-data and math-centered algorithmic approaches come with a variety of problems making them unrealistic and unfit, whereas the real breakthroughs happen when tools recognize the symbiotic relationship between operators and software, and focus on supporting a collaborative, iterative process to managing alerting over time. There should be a harmonious relationship between operator and monitoring tool, leveraging the strengths of both sides, with minimal factors harming the interaction. From what I can tell, bosun is pioneering this concept of a modern alerting IDE and is far ahead of other alerting tools in terms of providing high alignment between alerting configuration, the infrastructure being monitored, and individual team members, which are all moving targets, often even fast moving. In my experience this results in high signal/noise alerts and a happy team. (according to Kyle, the bosun project leader, my take is a useful one)

That said, figuring out the tool and using it properly has been, and remains, rather hard. I know many who would rather not fight the learning curve. Recently the bosun team has been making strides at making it easier for newcomers - e.g. reloadable configuration and Grafana integration - but there is lots more to do. Part of the reason is that some of the UI tabs aren’t implemented for non-opentsdb databases, and integrating Graphite, for example, into the tag-focused system that is bosun is bound to be a bit weird. (that’s on me)

For an interesting juxtaposition, we released Grafana v4 with alerting functionality which approaches the problem from the complete other side: simplicity and a unified dashboard/alerting workflow first, more advanced alerting methods later. I’m doing what I can to make the ideas of both projects converge, or at least make the projects take inspiration from each other and combine the good parts. (just as I hope to bring the ideas behind graph-explorer into Grafana, eventually…)

Note: One thing that somebody correctly pointed out to me is that I’ve been inaccurate with my terminology. Basically, machine learning and anomaly detection can be as simple or complex as you want to make them. In particular, what we’re doing with our alerting software (e.g. bosun) can rightfully also be considered machine learning, since we construct models that learn from data and make predictions. It may not be what we think of at first, and indeed, even a simple linear regression is a machine learning model. So most of my critique was about the big data approach to machine learning, rather than machine learning itself. As it turns out, the key to applying machine learning successfully is tooling that assists the human operator in every possible way, which is what IDEs like bosun do, and that is how I should have phrased it, rather than presenting it as an alternative to machine learning.

Dieter Plaetinck

Spotify music box

December 05, 2016 07:00 PM

For some time I've wanted to play Spotify music on my stereo installation, except it doesn't have bluetooth. I do own a nice aarch64 amlogic S905X based media center which runs LibreELEC, but libspotify, which I normally use in combination with mopidy, does not support aarch64 (libspotify is a binary blob from Spotify(tm)).

So I decided to use one of my spare ARM boards, the nanopi NEO: it has USB for the DAC and ethernet for streaming music, and it's supported in mainline (ethernet however requires patches). On it I run librespot, a service which enables my phone (or any other device running an official spotify client) to play music on the ARM board. All that was required was compiling librespot (a rust program) for ARMv7.

Arch Linux ARM does not offer rust as a binary package, so you'd have to use rustup and run the nightly channel, because the default rust channel segfaults.

curl -sSf https://static.rust-lang.org/rustup.sh | sh -s -- --channel=nightly

Compiling on an ARM board with 512 MB ram and no swap is currently not do-able with rust, even with 'cargo build -j1'. So I switched to a beefier board (orange pi pc) with 1 GB ram, which is luckily enough to compile librespot.

pacman -S protobuf portaudio
cargo build --release -j1

After it was compiled, I installed the binary with 'cargo install' and copied the systemd unit from the AUR package. So far librespot works as expected and hasn't crashed while running for a week.
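
For reference, a minimal sketch of those last steps, assuming the commands are run from the librespot source checkout and that the AUR package ships a unit called librespot.service (both assumptions; adjust to your setup):

cargo install                                    # installs the librespot binary into ~/.cargo/bin
sudo cp librespot.service /etc/systemd/system/   # unit file copied from the AUR package (name assumed)
sudo systemctl daemon-reload
sudo systemctl enable --now librespot.service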

Later I intend to integrate the nanopi and dac in a nice case with a power button to poweroff/poweron the nanopi.

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

Restoring accidental git force push overwrite on GitHub if you don't have the needed commits locally

November 14, 2016 09:33 AM

I like cleaning git history, in feature branches, at least. The goal is a set of logical commits without other cruft, that can be cleanly merged into master. This can be easily achieved with git rebase and force pushing to the feature branch on GitHub.

Today I had a little accident and found myself in this situation:

  • I accidentally ran git push origin -f instead of my usual git push origin -f branchname or git push origin -f HEAD
  • This meant that I not only overwrote the branch I wanted to update, but also, by accident, a feature branch (called httpRefactor in this case) to which a colleague had been force pushing various improvements which I did not have on my computer. And my colleague is on the other side of the world, so I didn’t want to wait until he woke up. (if you can talk to someone who has the commits, just have him/her re-force-push; that’s quite a bit easier than this) It looked something like this:
$ git push origin -f
  <here was the force push that succeeded as desired>
+ 92a817d...065bf68 httpRefactor -> httpRefactor (forced update)

Oops! So I wanted to reset the branch on GitHub to what it should be, and also it would be nice to update the local copy on my computer while we’re at it. Note that the commit (or rather the abbreviated hash) on the left refers to the commit that was the latest version on GitHub, i.e. the one I did not have on my computer. A little strange if you’re too accustomed to git diff and git log output showing hashes you have in your local repository.

Normally in a git repository, the objects dangle around until git gc is run, which clears any commits except those reachable by any branches or tags. I figured the commit was probably still in the GitHub repo (either because it’s dangling, or perhaps there’s a reference to it that’s not public, such as a remote branch); I just needed a way to attach a regular branch to it (either on GitHub, or by fetching it somehow to my computer, attaching the branch there and re-force-pushing), so step one was finding it on GitHub.

The first obstacle is that GitHub wouldn’t recognize this abbreviated hash anymore: going to https://github.com/raintank/metrictank/commit/92a817d resulted in a 404 commit not found.

Now, we use CircleCI, so I could see what had been the full commit hash in the CI build log. Once I had it, I could see that https://github.com/raintank/metrictank/commit/92a817d2ba0b38d3f18b19457f5fe0a706c77370 showed it. An alternative way of opening a view of the dangling commit we need, is using the reflog syntax. Git reflog is a pretty sweet tool that often comes in handy when you made a bit too much of a mess on your local repository, but also on GitHub it works: if you navigate to https://github.com/raintank/metrictank/tree/httpRefactor@{1} you will be presented with the commit that the branch head was at before the last change, i.e. the missing commit, 92a817d in my case.

Then follows the problem of re-attaching a branch to it. Running on my laptop git fetch --all doesn’t seem to fetch dangling objects, so I couldn’t bring the object in.

Then I tried to create a tag for the non-existent object. I figured the tag may not reference an object in my repo, but it will on GitHub, so if only I can create the tag, manually if needed (it seems to be just a file containing a commit hash), and push it, I should be good. So:

~/g/s/g/r/metrictank ❯❯❯ git tag recover 92a817d2ba0b38d3f18b19457f5fe0a706c77370
fatal: cannot update ref 'refs/tags/recover': trying to write ref 'refs/tags/recover' with nonexistent object 92a817d2ba0b38d3f18b19457f5fe0a706c77370
~/g/s/g/r/metrictank ❯❯❯ echo 92a817d2ba0b38d3f18b19457f5fe0a706c77370 > .git/refs/tags/recover
~/g/s/g/r/metrictank ❯❯❯ git push origin --tags
error: refs/tags/recover does not point to a valid object!
Everything up-to-date

So this approach won’t work. I can create the tag, but not push it, even though the object exists on the remote.

So I was looking for a way to attach a tag or branch to the commit on GitHub, and then I found a way. While having the view of the needed commit open, click the branch dropdown, which you typically use to switch the view to another branch or tag. If you type any word in there that does not match any existing branch, it will let you create a branch with that name. So I created recover.

From then on, it’s easy: on my computer I went into httpRefactor, backed my version up as httpRefactor-old (so I could diff against my colleague’s recent work), deleted httpRefactor, set it to the same commit as what origin/recover is pointing to, pushed it out again, and removed the recover branch on GitHub:

~/g/s/g/r/metrictank ❯❯❯ git fetch --all
(...)
~/g/s/g/r/metrictank ❯❯❯ git checkout httpRefactor
~/g/s/g/r/metrictank ❯❯❯ git checkout -b httpRefactor-old
Switched to a new branch 'httpRefactor-old'
~/g/s/g/r/metrictank ❯❯❯ git branch -D httpRefactor
Deleted branch httpRefactor (was 065bf68).
~/g/s/g/r/metrictank ❯❯❯ git checkout recover
HEAD is now at 92a817d... include response text in error message
~/g/s/g/r/metrictank ❯❯❯ git checkout -b httpRefactor
Switched to a new branch 'httpRefactor'
~/g/s/g/r/metrictank ❯❯❯ git push -f origin httpRefactor
Total 0 (delta 0), reused 0 (delta 0)
To github.com:raintank/metrictank.git
 + 065bf68...92a817d httpRefactor -> httpRefactor (forced update)
~/g/s/g/r/metrictank ❯❯❯ git push origin :recover                                                                                                                                            ⏎
To github.com:raintank/metrictank.git
 - [deleted]         recover

And that was that… If you’re ever in this situation and you don’t have anyone who can do the force push again, this should help you out.

Dieter Plaetinck

What do I need to know if I want to start contributing to SaltStack?

November 03, 2016 11:02 PM

How I came to work on SaltStack

I was working at Rackspace doing Linux support in the Hybrid segment. I did a lot of work with supporting Rackconnect v2/v3 and Rackspace Public cloud as well as the dedicated part of the house. I started down the road to server automation and orchestration the way I think a lot of people do. At some point, I started to think... there has to be a better way.

I began with learning chef. Rackspace's devops offering had just begun and there were a lot of people using chef and poo pooing puppet in the places that I was looking. I had never used ruby before, so I did some ruby practice on codecademy and learned the basics of different blocks in ruby. I then set up wordpress. As can be seen from that repository, it has been a very long time since I did chef. I played with chef for about 6 months, and then I decided to try using Ansible and see what that was all about. I liked the idea of pushing instead of pulling, and the easy deployment method was nice. But after about a month of using Ansible, Joseph Hall came to Rackspace right after the first SaltConf in 2014 and gave a 3 day class on salt. And I was in love. I loved the extensibility of salt, the reactor, the api, salt-cloud being built in. It was all just perfect for me. And by the end of the second day of the class, I had submitted my first pull request to saltstack.

My favorite thing about contributing to salt is how open it is to the community and how hard we all try to be welcoming to anyone new. We kind of have a No Jerks Allowed Rule, and try to be as polite and welcoming as possible.

Anyway, let's get started. This is going to be as much as I can think of on how to go about contributing to salt.

Getting setup

I probably run my testing setup a little differently than everyone else. Any way you can get salt running to do testing is good. If it works for you, do it.

I create a server in VMWare Fusion using CentOS. Then I install epel-release and then python-pip, and I do a pip install -e git://github.com/gtmanfred/salt.git@<branch>#egg=salt. This will give me everything I need to install to get salt running. Since it is also -e, the git install is editable, so changes take effect immediately and I can edit right there in ./src/salt. From there, I can just commit all my changes right in that checkout to save them for later.
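
Condensed into commands, that setup looks roughly like this (a sketch of the steps above rather than an exact recipe; the branch name is just an example, and note that pip expects a git+ prefix for editable VCS installs):

yum install -y epel-release
yum install -y python-pip git
pip install -e git+https://github.com/gtmanfred/salt.git@develop#egg=salt
cd ./src/salt    # the editable checkout pip created; edit and commit here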

Recently I have been trying to switch to using atom as my editor. I really like it. What I have been using is the remote ftp plugin. This allows the remote directory to be set to ~/src/salt; I just put that in the .ftpconfig and, once connected, there is a second project window that shows the remote ftp location with all the files, and I can treat them as if they were local files. Then, once all the changes are done, I can sync from the remote down to the local copy and make my pull request.

Either way, get a working environment going.

Here is the salt document on getting started with the development. You can ignore parts in there about M2Crypto and swig. There are no currently supported salt versions that use M2Crypto.

Another thing you could do, if you were so inclined, would be to copy the module you are going to be modifying to /srv/salt/_modules or whatever dynamic directory it belongs in. You will then need to run salt-call saltutil.sync_all to sync modules to the minion, or salt-run saltutil.sync_all for the master.

Writing a ... template

The first thing that I do any time I make a new file for a salt module is to add the following template.

# -*- coding: utf-8 -*-
'''
:depends: none
'''
from __future__ import absolute_import

# Import python libraries

# Import Salt libraries


def __virtual__():
    return True

Here are the things that are going on above.

  1. We require the # -*- coding: utf-8 -*- at the top of all files.
  2. Each file requires a docstring at the top to list any dependencies and the basic configuration needed for usage, such as s3 credentials. It is also good to use the :depends: key if there are any required packages that need to be installed for the module to be used.
  3. I pretty much always import absolute_import. This is just useful to have and will cause fewer weird issues later. Plus it is the default behavior in python3, so there is nothing bad that could come from it.
  4. Then we have the two import sections. Anything that gets imported from salt, like salt.utils, goes under the Import Salt libraries comment, and all other imports go under python libraries.
  5. Then we have the __virtual__ function, which we will go over later when we talk about the anatomy of a module.

Execution Modules

Now let's move on to writing a module. I am going to demo with a contrived example of a redis module, and then go over every line.

Here is a simplified salt/modules/redismod.py file.

# -*- coding: utf-8 -*-
'''
Redis module for interactive with basic redis commands.

.. versionadded:: Nitrogen

:depends: redis

Example configuration

.. code-block:: yaml
    redis:
      host: 127.0.0.1
      port: 6379
      database: 0
      password: None
'''

from __future__ import absolute_import

# Import python libraries
try:
    import redis
    HAS_REDIS = True
except ImportError:
    HAS_REDIS = False

__virtualname__ = 'redis'


def __virtual__():
    '''
    Only load this module if redis python module is installed
    '''
    if HAS_REDIS:
        return __virtualname__
    return (False, 'The redis execution module failed to load: redis python module is not available')


def _connect(host=None, port=None, database=None, password=None):
    '''
    Return redis client instance
    '''
    if not host:
        host = __salt__['config.option']('redis.host')
    if not port:
        port = __salt__['config.option']('redis.port')
    if not database:
        database = __salt__['config.option']('redis.database')
    if not password:
        password = __salt__['config.option']('redis.password')
    name = '_'.join([host, port, database, password])
    if name not in __context__:
        __context__[name] = redis.StrictRedis(host, port, database, password)
    return __context__[name]


def get(key, host=None, port=None, database=None, password=None):
    '''
    Get Redis key value

    CLI Example:

    .. code-block:: bash

        salt '*' redis.get foo
        salt '*' redis.get bar host=127.0.0.1 port=21345 database=1
    '''
    server = _connect(host, port, database, password)
    return server.get(key)


def set(key, value, host=None, port=None, database=None, password=None):
    '''
    Set Redis key value

    CLI Example:

    .. code-block:: bash

        salt '*' redis.set foo bar
        salt '*' redis.set spam eggs host=127.0.0.1 port=21345 database=1
    '''
    server = _connect(host, port, database, password)
    return server.set(key, value)


def delete(key, host=None, port=None, database=None, password=None):
    '''
    Delete Redis key value

    CLI Example:

    .. code-block:: bash

        salt '*' redis.delete foo bar
        salt '*' redis.delete spam host=127.0.0.1 port=21345 database=1
    '''
    server = _connect(host, port, database, password)
    return server.delete(key)

There, that is a moderately simple example where we can talk about everything going on.

  1. You will notice the coding line at the top like in the template
  2. Next we have the docstring.
    • There is a brief description
    • a versionadded string. Please include these if you make new modules, so that when referencing back we can see when the module was added. Also, if it is an untagged release, use the codename, otherwise use the point release where it was added. We update the codenames on all versionadded and versionchanged strings when we tag them with a release date.
    • A depends string, to let the user know that the redis python module is required.
    • An example configuration, if one is applicable.
  3. Then we have the imports. We catch the import error on redis, and set HAS_REDIS as False if it can't be imported so that we can reference it in the __virtual__ function and know if the module should be available or not.
  4. __virtualname__ is used to change the name the module should be loaded under. If __virtualname__ isn't set and returned by the __virtual__ function then the module would be called using redismod.set.
  5. The __virtual__ function is used to decide if the module can be used or not.
    • If it can be used and it has a __virtualname__ variable, return that variable. Otherwise if it is to be named after the name of the file, just return True.
    • If this function can't be used, return a two entry tuple where the first index is False and the second is a string with the reason it could not be loaded so that the user does not have to go code diving.
  6. Now the connect function.
    • If you include something like this, please be sure to also include the ability to connect to the module by passing arguments from the command-line and not only having to modify configuration files.
    • It is important to note that while python allows "private" functions to be imported and used, salt does not: the _connect function is not usable from the command-line or from the __salt__ dictionary.
    • There are a lot of includes that salt provides into different portions of salt. These are usually called dunder dictionaries.
    • Using config.get lets the configuration be put in the minion config, grains, or pillars. There is a hierarchy.
    • Lastly we have __context__. This is really useful for connections, because you only have to set up the connection one time, and then you can just keep returning it and using it every time the module is used, instead of having to reinitialize the connection.
  7. Lastly we have the functions that are available.
    • You want a doc string that has a description, then a code example. The code example is required. This is the doc string that gets shown when you run salt-call sys.doc <module.function>
    • Then just all the logic.
    • If you have stuff that is being used a lot in multiple functions, maybe split it out into another function for everything else to use; and if that function shouldn't be used from the command-line, be sure to prefix it with an underscore.

And that is your basic anatomy of a salt execution module.
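
Assuming a local redis is reachable and the connection settings from the docstring are in your minion config (both assumptions on top of the contrived example), a quick smoke test of the module could look like this:

salt-call --local saltutil.sync_all    # make sure the custom module gets synced
salt-call --local sys.doc redis.get    # shows the docstring and CLI examples
salt-call --local redis.set foo bar
salt-call --local redis.get foo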

State Modules

Now let's move on to writing state modules. State modules are where all the idempotence, configuration, and statefulness comes in. I am going to use the above module to make sure that certain keys are present or absent in the redis server.

Here is my simplified salt/states/redismod.py

# -*- coding: utf-8 -*-
'''
Management of  Redis servers
============================

.. versionadded:: Nitrogen

:depends: redis
:configuration: see :py:mod:`salt.modules.redis` for setup instructions

Example States

.. code-block:: yaml

    set redis key:
      redis.present:
        - name: key
        - value: value

    set redis key with host args:
      redis.absent:
        - name: key
        - host: 127.0.0.1
        - port: 1234
        - database: 3
        - password: somepass
'''

from __future__ import absolute_import

__virtualname__ = 'redis'


def __virtual__():
    if 'redis.set' in __salt__:
        return __virtualname__
    return (False, 'The redis execution module failed to load: redis python module is not available')


def present(name, value, host=None, port=None, database=None, password=None):
    '''
    Ensure key and value pair exists

    name
        Key to ensure it exists

    value
        Value the key should be set to

    host
        Host to use for connection

    port
        Port to use for connection

    database
        Database key should be in

    password
        Password to use for connection
    '''
    ret = {'name': name,
           'changes': {},
           'result': False,
           'comment': 'Failed to set key {key} to value {value}'.format(key=name, value=value)}

    connection = {'host': host, 'port': port, 'database': database, 'password': password}
    current = __salt__['redis.get'](name, **connection)
    if current == value:
        ret['result'] = True
        ret['comment'] = 'Key {key} is already set to the correct value'.format(key=name)
        return ret

    if __opts__['test'] is True:
        ret['result'] = None
        ret['changes'] = {
            'old': {name: current},
            'new': {name: value},
        }
        ret['pchanges'] = ret['changes']
        ret['comment'] = 'Key {key} will be updated.'.format(key=name)
        return ret

    __salt__['redis.set'](name, value, **connection)

    current, old = __salt__['redis.get'](name, **connection), current

    if current == value:
        ret['result'] = True
        ret['comment'] = 'Key {key} was updated.'.format(key=name)
        ret['changes'] = {
            'old': {name: old},
            'new': {name: current},
        }
        return ret

    return ret


def absent(name, host=None, port=None, database=None, password=None):
    '''
    Ensure key is not set.

    name
        Key to ensure it does not exist

    host
        Host to use for connection

    port
        Port to use for connection

    database
        Database key should be in

    password
        Password to use for connection
    '''
    ret = {'name': name,
           'changes': {},
           'result': False,
           'comment': 'Failed to delete key {key}'.format(key=name)}

    connection = {'host': host, 'port': port, 'database': database, 'password': password}
    current = __salt__['redis.get'](name, **connection)
    if current is None:
        ret['result'] = True
        ret['comment'] = 'Key {key} is already absent'.format(key=name)
        return ret

    if __opts__['test'] is True:
        ret['result'] = None
        ret['changes'] = {
            'old': {name: current},
            'new': {name: None},
        }
        ret['pchanges'] = ret['changes']
        ret['comment'] = 'Key {key} will be deleted.'.format(key=name)
        return ret

    __salt__['redis.delete'](name, **connection)

    current, old = __salt__['redis.get'](name, **connection), current

    if current is None:
        ret['result'] = True
        ret['comment'] = 'Key {key} was deleted.'.format(key=name)
        ret['changes'] = {
            'old': {name: old},
            'new': {name: current},
        }
        return ret

    return ret

And let's review; this will mostly be the same as the execution module, with one major difference: we use the execution module in the state.

  1. Same coding line
  2. Include depends and configuration information. If the configuration is stored with the module, you can link to the module using a py:mod link like I did above.
  3. Include any complex information about the state in the top doc string. It is important to also include an example state up here. But if you have more complicated states, it would be good to include examples in each function to show how they should be used.
  4. Check to see if the redis.set function is loaded in the __salt__ dunder. If it is not loaded, we know we can't do any work in this state, and we should return False.
  5. Now we get to writing a state
    • We have a return dictionary and it always includes the following:
      • name: the string name of the state
      • changes: a dictionary of things that were or could be changed
      • pchanges: a dictionary of potential changes that is used if test=True is passed
      • result: True, False, None
      • comment: a string describing what happened in the state.
    • I always start with a default ret variable that describes what happens when the state fails, so I can just return it on failure at the end.
    • Then the first thing to do is check if the state is already as it should be. In the case of present we check if the key is already set to the desired value. For absent we check if the key is set to None, which indicates a null value, which is what redis considers deleted. If it is already as desired, we set result to True, set the comment to reflect that, and return the dictionary.
    • There is also a test-run portion of the state: we should check if __opts__['test'] is True, which signifies that test=True was passed on the command-line. In that case we should only set changes to reflect what would change, and return with result set to None to signify that the change should succeed but has not been made yet (see the dry-run sketch after this list).
    • Last, we make the change, then check if the change took effect. If it did, result should be True, and we return with the correct stuff in changes and an updated comment.
    • Otherwise we return with the False dictionary we set up at the beginning.
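
As a quick way to exercise that test=True path, here is a rough sketch using state.single against the contrived redis state above; it assumes the custom state and execution modules have been synced and that the redis connection details are available in the minion config:

salt-call --local state.single redis.present name=foo value=bar test=True   # dry run, result should be None
salt-call --local state.single redis.present name=foo value=bar             # real run, result should be True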

One other thing to remember is the mod_init and mod_watch functions. These can be used to change the way the module behaves when initially called. mod_watch is what actually gets called when you watch or listen to a state in your requisites.

Running pylint on your changes

We run pylint on every change, so it is a good thing to know about, because you can start adjusting yourself to write more in line with what pylint wants. The only big thing that I will say you should know is that our line limit is actually 119 instead of 80.

Now, to run pylint, you are going to need a few things. You should install all the dependencies for salt, and you should install the stuff from dev_python27.txt. But then you also need to update to the newest versions of SaltPylint and SaltTesting.

pip install -r requirements/dev_python27.txt -r requirements/raet.txt -r requirements/zeromq.txt
pip install --upgrade SaltPyLint SaltTesting

And then you can run pylint on your code before submitting a PR.

pylint --rcfile=.testing.pylintrc --disable=W1307,E1322 salt/

Getting the docs working

Unfortunately, if you write a new module, sphinx is unable to discover it and just import the docstrings for you, so we will need to create a few files to reference the ones above.

First, we autoload the docstrings for the actual doc file.

doc/ref/modules/all/salt.modules.redismod.rst

==================
salt.modules.redis
==================

.. automodule:: salt.modules.redismod
    :members:

doc/ref/states/all/salt.states.redismod.rst

==================
salt.states.redis
==================

.. automodule:: salt.states.redismod
    :members:

Then they will get compiled. We also have to add references to the correct index files: just add redis to doc/ref/modules/all/index.rst and doc/ref/states/all/index.rst so that the new pages will be visible in the index pages for all execution modules and all state modules.

Creating a pull request

We love pull requests. Just look at the github repository: there have been 23,296 pull requests, almost all of which I would bet were accepted, and almost none of which were closed saying we can't accept that. There have been 1642 contributors as of this writing!

Here are things to remember when opening a pull request.

  • If it is a new feature, add it to develop. We include a very easy way to take the changes and import them into a running system; we don't want to break other people's deploys by adding new features to point releases.
  • If it is a bug fix, go back to the oldest supported release branch and add it there (see the git sketch after this list). Right now, unless it is a CVE change, the oldest supported release for commits is 2016.3; everything else is in phase 3 or extended life support. (We are working very hard to get Carbon out the door right now.)
  • Please fill out the form! Fill out as much of the pull request form as makes sense, and provide us with as much information as you can about the change you are making. I am bad about it too; sometimes I just think Mike Place or Nicole Thomas are mind readers and can just get what I mean, but they definitely can't. So let them know in detail what you are actually changing.
  • I will cover this in a later part, but please! provide unittests if at all possible! (though not required)
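
As a rough sketch of that branch targeting (the fork URL and branch names below are placeholders; the release branch just mirrors the 2016.3 example above):

git clone git@github.com:<your-username>/salt.git
cd salt
git remote add upstream https://github.com/saltstack/salt.git
git fetch upstream

# new feature: base it on develop
git checkout -b my-new-feature upstream/develop

# bug fix: base it on the oldest supported release branch (2016.3 in this example)
git checkout -b my-bug-fix upstream/2016.3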

End of Part 1

This was a lot longer than I thought it was going to be. I am going to try to continue next week and talk about beacons and engines and some specifics to look for there. Hopefully this will be helpful to someone. It basically just became a link dump to a lot of useful information in our documentation, since it can sometimes be hard to find.

Leave a comment if you have anything that you would like to see covered.

Daniel Wallace@Daniel Wallace

ttf-dejavu 2.37 will require forced upgrade

October 31, 2016 09:36 AM

ttf-dejavu 2.37 will change the way the fontconfig configuration is installed. In previous versions the configuration was symlinked from post_install/post_upgrade; the new version will place the files inside the package, as is done in fontconfig now.

For more information about this change: https://bugs.archlinux.org/task/32312

To upgrade to ttf-dejavu 2.37 it's recommended to upgrade the package on its own: pacman -S --force ttf-dejavu

Jan de Groot@Official News

Using webhooks and the reactor with masterless minions

October 14, 2016 10:17 PM

Getting Started

This requires at least the 2016.11.0 release of saltstack.

http://repo.saltstack.com/

Then just yum install -y salt-minion

Modules from Nitrogen

You will also need a few new modules and a new engine that will be in the Nitrogen release.

In /srv/salt/_modules you will need the following two modules: new hashutil module and new event module

And the thing that makes it all possible, the new webhook engine, needs to be put in /srv/salt/_engines: Webhook engine

Once these are all in place, run salt-call saltutil.sync_all to make sure they get put in the extmods directory and are usable.

Configuration

My configurations are located here, but I will highlight some of the specifics below.

First, we want to make the minion a masterless minion, and to never query the master for anything. So to /etc/salt/minion.d/local.conf add

local: True
file_client: local
master_type: disable

Any one of these settings could be used; I like to use all three just to be certain.

Second, we need to set up the ssl keys so that we can have a secure connection. You can run the following command to create a generic ssl certificate; if you want verification, you can make a proper certificate for the domain and everything, but we just want the traffic encrypted, so use salt-call --local tls.create_self_signed_cert. Now that we have an ssl certificate pair, we can set up the webhook engine. I put the following in /etc/salt/minion.d/engines.conf.

engines:
  - webhook:
      address: None
      port: 5000
      ssl_crt: /etc/pki/tls/certs/localhost.crt
      ssl_key: /etc/pki/tls/certs/localhost.key
  - reactor: {}

reactor:
  - 'salt/engines/hook/update/blog/gtmanfred.com':
    - salt://reactor/blog.gtmanfred.com.sls

This will enable the webhook on all ips on port 5000 with the listed ssl certificate. It will also enable the reactor to be able to act upon the one tag in the event stream, which we will get to later.

Now we need to set up the github webhook so we can see the events in the event stream. Go to your blog's github repository, and go to the settings. Then select webhooks, and create a new one.

Configure Github

For the "Payload URL" you are going to set https and then the ip address/domain and port to access, followed by the URI, which should match what you are going to trigger on in the reactor. As you can see in the picture above, I have /update/blog/gtmanfred.com as my URI, and this matches what follows the prefix salt/engines/hook in the reactor config above. Be sure to add a secret! And don't forget it! We will be verifying that in a later step. Then customize which events you would like to trigger on and save. I am going to rebuild the blog on each push, so I am only sending push events.

BE SURE TO DISABLE SSL VERIFICATION IF YOU DON'T USE A SIGNED KEY!

Before you forget that secret key, we should save it somewhere. I use sdb in salt so that I can save my states and reactors in public github, but hide the secret key in sdb. Create /etc/salt/minion.d/sdb.conf with the following.

secrets:
  driver: sqlite3
  database: /var/lib/sdb.sqlite
  table: sdb
  create_table: True

Now run salt-call --local sdb.set sdb://secrets/github_secret <secretkey> to save the key.

Now the last step: creating the reactor file in the salt fileserver. Mine is in /srv/salt/reactor/blog.gtmanfred.com.sls, so I just have to reference it with salt://reactor/blog.gtmanfred.com.sls (you can also use reactor files from gitfs).

{%- if salt.hashutil.github_signature(data['body'], salt.sdb.get('sdb://secrets/github_secret'), data['headers']['X-Hub-Signature']) %}
highstate_run:
  caller.state.apply:
    - args: []
{%- endif %}

Let's walk through this. We take the data['body'] from the github post, our secret, and the X-Hub-Signature, and run them through the github_signature function to verify that the signature is the result of signing the body with your secret key. If True comes back, we can be sure this came from github, and our minion runs a highstate on itself. If it is False, nothing is rendered and nothing is run.

Daniel Wallace@Daniel Wallace

Git credential helper pass

October 14, 2016 11:17 AM

At work we use Git with https auth, which sadly means I can't use ssh keys. Since I don't want to enter my password every time I pull or push changes to the server, I wanted to use my password manager to handle this for me.

Git has pluggable credential helper support, for example for gnome-keyring and netrc, and adding pass support turned out to be quite easy.

Create a script called "pass-git.sh" and put the following contents in it where 'gitpassword' is your password entry.

#!/bin/bash
echo "password="$(pass show gitpassword)

In your git directory which uses https auth execute the following command to setup the script as a credential helper.

git config credential.helper ~/bin/pass-git.sh
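
The helper also needs to be executable, and you can sanity-check it outside of git by feeding it a fake credential request; git calls the helper with the get action and key=value pairs on stdin, which this simple script simply ignores. The host below is just an example:

chmod +x ~/bin/pass-git.sh
printf 'protocol=https\nhost=example.com\n\n' | ~/bin/pass-git.sh get
# should print: password=<your password>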

Voila, that's all there is to it.

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

5 euro USB logic analyzer review

September 26, 2016 09:00 PM

I found this cheap 5 euro USB logic analyzer via cnx.com and bought it on aliexpress.

It turned out to be quite easy to get the analyzer working on Arch Linux; the following packages need to be installed:

pacman -S pulseview

The firmware for the device is only available in the AUR.

cower -dd sigrok-firmware-fx2lafw
cd sigrok-firmware-fx2lafw && makepkg -si

Using the logic analyzer in pulseview without running it as root requires a udev rule to be set up, since libsigrok does not provide this udev rule (which might be considered a packaging bug in Arch Linux).

wget http://pkgbuild.com/~jelle/60-libsigrok.rules
# As root / sudo
cp 60-libsigrok.rules /usr/lib/udev/rules.d/
udevadm control --reload

To test the logic analyzer I soldered a simple board which 'taps' the serial communication from the nanopi neo to my laptop. After connecting the ground and RX, TX from the board to the logic analyzer I started pulseview. Then select the saleae logic (or a different name) as device and press run while making some noise in the program connected to the tty device (for example screen).

Pulseview 1

In pulseview some peaks should show up; it might help to increase the sample size (for example to 1M).

Configuring pulseview to decode UART data is as easy as selecting the UART option from the GUI.

Pulseview 2

Select the RX/TX channels corresponding to how you hooked them up to the logic analyzer; the baud rate, data bits etc. might be different in your setup.

Pulseview 3

And voila, pulseview decodes the UART data into a readable ascii representation.

Pulseview 4

Summary

This is the first logic analyzer I've ever used and so far it has exceeded my expectations. It was easy to get pulseview working, and all the required software is fully open source. The analyzer should be able to decode I2C, SPI and UART, limited to 24 MHz. So far it has been worth the 5 euro :-)

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

arch-audit

September 25, 2016 12:00 AM

I started a tiny project a couple of days ago: arch-audit.

arch-audit’s main (and only) goal is to display the Arch Linux packages on your system that are affected by known vulnerabilities.

To do that, arch-audit parses the CVE page on the Arch wiki, which is maintained by the Arch CVE Monitoring Team.

arch-audit’s output is very verbose when it’s started without any arguments, but two options, --quiet (or -q or -qq) and --format (or -f), allow you to adapt the output to your use case. There’s also a third option, --upgradable, to display only packages that have already been fixed in the Arch Linux repositories.

In my opinion a great use case is the following:

$ ssh www.andreascarpino.it
openssl>=1.0.2.i-1
lib32-openssl>=1:1.0.2.i-1
Last login: Sat Sep 24 23:13:56 2016
$

In fact, I added a systemd timer that executes arch-audit -uq every day and saves its output to a temporary file that is configured as the banner for SSH. Then, every time I log into my server, I get notified about packages that have vulnerabilities which have already been fixed. Time to do a system update!
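
A minimal sketch of that banner idea (the path, the scheduling mechanism and the exact sshd setting are assumptions, not necessarily the author’s actual setup):

# run this from a daily systemd timer (or cron job):
arch-audit -uq > /run/arch-audit.banner

# and point sshd at it in /etc/ssh/sshd_config:
#   Banner /run/arch-audit.banner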

So, now I’m waiting for your feedback! Have fun!

BTW, Lynis already added arch-audit support!

Andrea Scarpino

FriendlyARM NanoPi NEO review

August 21, 2016 04:00 PM

The NanoPI NEO is a little 8 dollar ARM device with an interesting form factor and specifications.

  • 512/256 MB ram (single slot)
  • Cortex-A7 Quad-Core
  • USB 2.0
  • 100 Mbps ethernet
  • 40 x 40 mm board size
  • SD card slot

FriendlyARM NANO

Mainline support

FriendlyARM provides an UbuntuCore image, but of course I want to run Arch Linux ARM on it.

To use Arch Linux ARM on the NanoPI you will have to compile your own kernel and u-boot; I've used the armv7 tarball from archlinuxarm.org. The mainline kernel does not support the board yet, so a posted DTS file is required on top of it. This provides a kernel without ethernet support; ethernet support can be added by compiling this Linux tree, which hopefully lands in 4.9 or later. The DTS file has to be edited to add ethernet support: just append the following underneath the usbphy node.

&emac {
  phy-handle = <&int_mii_phy>;
  phy-mode = "mii";
  allwinner,leds-active-low;
  status = "okay";
};

After setting up the correct .dtb and zImage for the kernel, u-boot has to be compiled. U-boot master contains support for the nanopi_neo and has to be compiled as follows.

make -j8 ARCH=arm CROSS_COMPILE=arm-none-eabi-  nanopi_neo_defconfig
make -j8 ARCH=arm CROSS_COMPILE=arm-none-eabi-

Power usage

The board connected with an ethernet cable uses 5 V and ~0.10 A, which means 5 * 0.10 / 1000 = 0.0005 kW. Running the nanopi NEO for a year therefore uses 0.0005 * 24 * 365 = 4.38 kWh. The Dutch price per kWh is ~0.22 euro, so 4.38 * 0.22 = ~1 euro!

Heating issues

The board seems to have some heat issues, as reported on the sunxi wiki, so it's recommended to use a heatsink.

Summary

Overall it looks like a fun, low-powered board which can be useful for running small services: mqtt, a taskd server, a webcam server, or collecting sensor data using the gpio pins.

Jelle van der Waa (jelle@vdwaa.nl)@Jelle Van der Waa

TeXLive 2016 packages are now available

August 06, 2016 06:35 PM

TeXLive packages have been updated to the 2016 version.

The most notable change is that the biber utility is now provided as a separate package. You can install it normally using pacman.

Pacman hooks are now used in the TeXLive packages so the update will be less verbose than in past years.

Rémy Oudompheng@Official News

PHP 7.1 beta packages for Arch Linux

July 24, 2016 08:58 AM

The first beta version of PHP 7.1 has been released and it's time to have a look at the next iteration of the PHP 7 series. You will find a set of packages in my repository:

[php]
Server = https://repo.pierre-schmitz.com/$repo/os/$arch

Insert these lines on top of the other repository definitions in your /etc/pacman.conf. A copy of the PKGBUILDs I used to create these packages are available in my git repository.

Packaging

I intend to update these with beta versions and release candidates till the final release of PHP 7.1.0 later this year. Even though I will try to provide a smooth update path, please be prepared to encounter problems.

Despite there being a new module API, third-party modules seem to work fine after a simple rebuild, in contrast to our first contact with PHP 7. All these modules are available in my repository as well.

New features

With the new minor release we will get more improvements to the scalar and return type declarations introduced in PHP 7.0. My favorite new features are:

  • Nullable Types
    Being able to declare a type to be either specific or null was a missing feature in version 7.0, which led people to not declare any type at all.
  • Void Return Type
    You are now able to declare a function to never return anything; another missing piece of the new return type declarations.
  • Iterable type
    We are finally able to declare a type that matches an array but also classes that implement the Traversable interface. In short it is anything you can use with foreach(). This means we no longer need to put primitive arrays into traversable objects if we like to use type hinting.
  • Class constant visibility modifiers
    Using constants internally is less awkward as we are now able to declare them with private visibility. People no longer need to abuse private properties to document that certain constants should not be used from a foreign context.

A complete list of changes can be found in the PHP 7.1 NEWS file. Also see the continuously updated UPGRADING file.

Testing and benchmarking

While PHP 7.1 is still under development the packages I provide are configured with production settings. Optimizations are turned on, all debugging functions and information are disabled and stripped from the binaries. This means you may use these to test and benchmark your applications and server setups.

Let me know of any issues and share your experiences with the first minor update of PHP 7.

Pierre@Pierre Schmitz

test-sec-flags: Call for Assistance

July 18, 2016 05:25 AM

Inspired by discussions on the arch-general mailing list, test-sec-flags was created by pid1 (with help from anthraxx, strcat, sangy, and rgacogne) to test the performance impact of various security-oriented compilation and linking flags. The goal is to determine if these flags can be the new default for all Arch Linux packages. Preliminary results suggest that the performance impact is almost nonexistent compared to the compilation flags we already use, but we would like to collect and compare more results before moving forward.

Download the source here and see the README for installation and usage instructions. The results subdirectory contains instructions on how to pull out the relevant statistics from the result files.

We are collecting results in the test-sec-flags wiki on GitHub. Please add your results there. In particular, we would very much like i686 results, as all of the previous contributors have been on x86_64 devices.

Patches welcome.

Allan McRae@Official News

Arch Linux Milestone

July 11, 2016 02:18 PM

Last weekend, Arch Linux hit the milestone of 50,000 bug reports, about 9 years, 9 months, and 4 days after the first bug was filed.

Allan@Allan McRae

screen-4.4.0-1 unable to attach old sessions

June 26, 2016 06:39 PM

After upgrading to screen-4.4.0-1 you will be unable to reattach sessions started with earlier screen versions. Please make sure all your sessions are closed before upgrading.
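
A quick way to check for sessions you may have forgotten about before pulling in the update (the session name below is hypothetical):

screen -ls            # list running screen sessions
screen -r mysession   # reattach, finish your work, then exit the session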

Gaetan Bisson@Official News

More Breakage

June 21, 2016 07:02 AM

I broke reddit

#allanbrokeit

Allan@Allan McRae

Choqok 1.6 Beta 1

June 17, 2016 12:00 AM

I’m happy to announce that we will release Choqok 1.6 next month (mid-July)!

This will be the first release since the KDE Frameworks port, and many things have been fixed in those 16 months, including:

  • Twitter: fix user lists loading (BUG:345641)
  • Twitter: allow selecting any follower when sending a direct message
  • Twitter: fix searches by username
  • Twitter: fix searches by hashtag
  • Twitter: show original retweet time (BUG:343438)
  • Twitter: fix external URL to access direct messages and tweets
  • Twitter: send direct message without text limits
  • Twitter: support to send and view tweets with quoted text
  • Twitter: allow deleting direct messages
  • Twitter: always show ‘Mark as read’ button
  • GNU Social: fix media attachments to posts
  • GNU Social: allow sending direct messages
  • Pump.IO: do not show resend button for own posts
  • Pump.IO: display avatar image in own posts on the right
  • Pump.IO: do not create a post if there’s no text
  • Fix removal of accounts with spaces in their name
  • Add scalable versions of Choqok icon
  • Check the result of external URL opening to report any failure (BUG:347525)
  • Fixed a bug that overwrote an existing account when another account was created with the same alias
  • Do not allow sending quick posts with no text
  • Always use HTTPS when available
  • ImageView: dropped Twitpic and Tweetphoto support (the services are dead)
  • A couple of segmentation faults fixed

Oh, we also added official support for Friendica!

But there’s still a lot to do!

Please join the KDE translation team and help us with translations, try this 1.6 beta and report any bugs, or join the development team and fix the open bugs.

Together we can make the next release a new starting point for Choqok!

Andrea Scarpino

Maintainers Matter

June 16, 2016 11:31 AM

The case against upstream packaging (postscript)
Kyle Keen

Maintainers Matter

June 15, 2016 11:51 AM

The case against upstream packaging
Kyle Keen

June’s TalkingArch, and Other News

June 08, 2016 07:16 PM

The TalkingArch Release

The TalkingArch team is pleased to bring you the newest TalkingArch release. This version includes version 4.5.4 of the Linux kernel, the ndisc6 package with some IPv6 networking utilities, and uefi-tools for booting x86_64 systems with UEFI. Find it in the usual spot.

Torrent Changes

Some users who downloaded TalkingArch using BitTorrent may have noticed that the TalkingArch iso is now delivered inside a directory, along with its signature files. This is done for convenience, so that downloading the one torrent gets everything, rather than forcing users who want to verify the integrity of the iso to go back to the website to download the signatures. This change was introduced with the previous release, but that release wasn't posted here. In any case, enjoy the new format.
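
Verification then works directly from the downloaded directory; a sketch with illustrative file names (use the actual names from the release you grabbed):

gpg --verify talkingarch-*.iso.sig talkingarch-*.iso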

TalkingArch and IPv6

The TalkingArch team is very pleased to announce full IPv6 connectivity for all TalkingArch sites and services. Anyone with IPv6 connectivity will immediately notice the new addresses for the connections they make, including connections to the NetwIRC where the #talkingarch IRC channel resides. TalkingArch has now joined the future of the internet. Thanks to all who answered questions and made the transition run smoothly. Have a virtual beer, on the house.

kyle@TalkingArch

Planet Arch Linux

Planet Arch Linux is a window into the world, work and lives of Arch Linux hackers and developers.

Last updated on July 18, 2018 07:33 PM. All times are normalized to UTC time.