Getting started with unit tests in Qt Creator and Catch

I have written about unit testing in Qt Creator on multiple occasions earlier. Since then, a new testing framework seems to have taken the lead in the field: Catch. It is similar to UnitTest++ and Google's gtest and brings the best of both worlds. It is header-only and appears to reduce the amount of code that must be written in comparison to the other two.

This is an adapted version of my previous posts that shows how to use Catch in a Qt Creator project. It no longer integrates the testing output into Qt Creator's issue pane. Instead, I've started to rely on Travis CI or Jenkins to run the tests and notify me about any newly introduced errors.

An example project using this code structure has been posted on GitHub by Filip Sund (thanks, Filip!) and further adapted by us with the move to Catch.

The structure of the project is as follows:

├─ defaults.pri
├─ app/
│  ├─ app.pro
│  └─ main.cpp
├─ src/
│  ├─ src.pro
│  ├─ myclass.cpp
│  └─ myclass.h
└─ tests/
   ├─ tests.pro
   └─ main.cpp

The main project file will now be based on a subdirs template and may look like this:

TEMPLATE = subdirs
CONFIG += ordered
SUBDIRS = \
    src \
    app \
    tests

app.depends = src
tests.depends = src

The app.depends and tests.depends statements make sure that the src project is compiled before the application and the tests. This is because the src directory contains the library that will be used by both the app and the tests.

CONFIG += ordered is needed because depends doesn't affect the order during make install.


Each of the other .pro files will include defaults.pri to make all the headers available. A minimal defaults.pri only needs to add the src directory to the include path:

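# Make the library headers in src/ available to app/ and tests/
INCLUDEPATH += $$PWD/src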

If the library, the main program, and the tests all use common libraries, it is very useful to have defaults.pri define these dependencies too, for example (here with a made-up Armadillo dependency):
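
# Hypothetical example: a common dependency that every sub-project links to
LIBS += -larmadillo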


In the src folder, I have myclass.cpp, which holds the class that I want to use and test. This needs to compile to a library, so that it may be used both by app and tests. The src.pro file could look something like this:

include(../defaults.pri)

CONFIG -= qt

TARGET = myapp
TEMPLATE = lib

SOURCES += myclass.cpp
HEADERS += myclass.h

What this class does is not so interesting; it could be anything. A simple example could be this header file:

#ifndef MYCLASS_H
#define MYCLASS_H

class MyClass {
public:
    double addition(double a, double b);
};

#endif // MYCLASS_H

With this accompanying source file:

#include "myclass.h"

double MyClass::addition(double a, double b) {
    return a * b;
}


I only have a main.cpp file in app, because the app is basically just something that uses everything in the src folder. It will depend on the shared library compiled from src. The app.pro file would look something like this:


include(../defaults.pri)

CONFIG += console
CONFIG -= app_bundle
CONFIG -= qt

SOURCES += main.cpp

LIBS += -L../src -lmyapp

The main.cpp file could be a simple program that uses MyClass:

#include <myclass.h>
#include <iostream>

using namespace std;

int main()
{
    MyClass adder;
    cout << adder.addition(10, 20) << endl;
    return 0;
}


In the tests folder, I have simply added a main.cpp file which will run the tests. The tests.pro file then has the following contents:


include(../defaults.pri)

CONFIG   += console
CONFIG   -= app_bundle
CONFIG   -= qt

SOURCES += main.cpp

LIBS += -L../src -lmyapp

This now links to the myapp library in addition to compiling the unit tests.

The main.cpp file in tests could then contain the following, using Catch as our testing library:

#define CATCH_CONFIG_MAIN  // This tells Catch to provide a main() - only do this in one cpp file
#include "catch.hpp"
#include <myclass.h>

TEST_CASE( "MyMath", "[mymath]" ) {
    SECTION("Addition") {
        MyClass my;
        REQUIRE(my.addition(3, 4) == 7);
    }
}

This test will fail because my implementation of MyClass::addition is completely wrong:

class MyClass {
public:
    double addition(double a, double b) {
        return a * b;
    }
};

Note that I'm including MyClass through <myclass.h>, which is possible because of the INCLUDEPATH variable in defaults.pri.

This should help you define a project that compiles a library, as well as tests and an application using the library.
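
To build everything and run the tests from a terminal, something like the following should work (assuming an in-source build, where qmake names the test executable tests after tests.pro):

qmake
make
./tests/tests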

Why I won’t be starting my next project in Rust

I have been inspired to learn Rust and Julia lately. The idea was to use these promising languages in all of my new projects. I wanted to learn Rust because it is a safe and fast replacement for C++. I wanted to learn Julia because it is a language tailored for scientific use that might someday replace Python.

I quickly realized that I would be learning both for the wrong reason. I already know C++ and Python well, and should be starting important new projects in these languages.

One article that changed my mind was “Criticizing the Rust Language, and Why C/C++ Will Never Die” by Eax Melanhovich. Not all of the points Melanhovich makes are still valid. For instance, the speed of Rust has improved significantly over just the past months. However, he is right that the number and quality of development tools for Rust is far lower than for C++. And it is hard to argue with the number of vacant positions for C++ programmers in comparison to those for Rust and Julia.

This does of course not say anything about the quality of these languages. For all I know, Rust may be (or become) a much safer, faster, and more elegant language than C++. But I understand that many of the benefits I would get from Rust are already available in the new C++11 and C++14 standards, and from using C++ in a modern way.

I found Rust attractive because of its promise of memory safety. It is designed in a way so that you won’t be able to shoot yourself in the foot as easily as you can in C++. However, I’m currently using pointers in C++ the old-fashioned way. I still work with raw pointers, new and delete when I should be using smart pointers. Sometimes I even use pointers when references and values would be the right choice.
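
To make that concrete, here is a small made-up example (not from any real project of mine) contrasting the old-fashioned style with the modern one:

#include <iostream>
#include <memory>

struct Measurement {
    double value = 42.0;
};

int main()
{
    // The old-fashioned way: raw pointers with new/delete.
    // Forgetting the delete leaks; deleting twice crashes.
    Measurement *raw = new Measurement;
    std::cout << raw->value << std::endl;
    delete raw;

    // Modern C++14: a smart pointer releases the memory automatically.
    auto smart = std::make_unique<Measurement>();
    std::cout << smart->value << std::endl;

    // And often a plain value is all that is needed.
    Measurement value;
    std::cout << value.value << std::endl;

    return 0;
}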

I realize now that I need to start a project or two where I use only modern C++. That should hopefully teach me how to steer clear of those nagging segfaults sometime in the future. I’ll be reporting back here about my experiences and will make a list of some recommendations for others who are trying to do the same.

Today I called Python from QML

I always use QML with Qt Quick for GUI programming. It’s incredible both for prototyping and larger applications. I find it easy to express myself in QML, because it is so flexible. It’s declarative. You can bind a button’s position to the value of a slider in just one line. Not that you’d ever want to do that, but it shows how easy it is to connect objects together. I really wish the web was written in QML and not HTML.

Typically, I’m working with both QML and C++. I write high performance and visualization code in C++ and define GUI elements in QML. But today I wanted to use QML and Python together because I’m working on an experiment browser for our lab. Not an experimental browser, but an application that lists all experiments and makes it easy to export data for further analysis.

We have decided to organize the experiments as HDF5 files. To read these files I want to use the Python HDF5 package. I could use the C++ HDF5 library, but using Python should hopefully make the application more easily maintainable in the future, also for non-C++ coders in our lab. To do this, I figured there were two possibilities: PyQt and PyOtherSide.

In brief (and a bit simplified), PyQt calls Qt code from Python, while PyOtherSide calls Python code from QML. The difference isn’t really that big, so it just boils down to where the business logic resides. I figured that PyOtherSide would be the better option for us, because it allows everyone to help out with the Python code without learning anything about Qt. PyQt would on the other hand require everyone to have at least some understanding of the Qt framework to make changes in the code. More of the business logic will have to take place in QML, though.

PyOtherSide is really simple to use. You just define a Python object in QML, and this object loads modules and files from the Qt resource file (qrc). Once loaded, Python functions can be called from QML, and their results are automatically converted from Python types to Qt types:

import io.thp.pyotherside 1.3

Python {
    property bool ready: false

    function loadData() {
        if(!ready) {
            return
        }
        call("hdf5_loader.read_experiments", [], parseData)
    }

    function parseData(result) {
        for(var i in result) {
            var element = result[i]
            // do something with each experiment element here
        }
    }

    Component.onCompleted: {
        importModule("hdf5_loader", function() {
            ready = true
        })
    }
}

As you can see, all Python calls are asynchronous. You can also make synchronous Python calls, but this is not recommended because it could cause the QML GUI to stall while waiting for the results.
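
For completeness, a synchronous version of the above might look like this sketch, using PyOtherSide's call_sync and importModule_sync (both block the QML thread until Python returns, which is why the asynchronous style is preferred):

Python {
    Component.onCompleted: {
        // Blocks until the module is imported and the call returns;
        // acceptable at startup, risky during user interaction.
        importModule_sync("hdf5_loader")
        var experiments = call_sync("hdf5_loader.read_experiments", [])
        console.log("Loaded " + experiments.length + " experiments")
    }
}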

The Python code in this case is just a simple function that returns a list of dictionaries:

def read_experiments():
    return [{"experimenter": "test",
             "date": "2015-10-26"},
            {"experimenter": "test2",
             "date": "2015-10-25"}]

Now I just need to make this code read and parse real HDF5 files.
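
That will probably mean using the h5py package. A rough sketch of what hdf5_loader.py could grow into; note that the file name and layout (one group per experiment, with experimenter and date attributes) are just my assumptions:

import h5py

def read_experiments(filename="experiments.h5"):
    """Return one dictionary per experiment group in the HDF5 file."""
    experiments = []
    with h5py.File(filename, "r") as f:
        for name in f:
            group = f[name]
            experiments.append({
                "experimenter": str(group.attrs.get("experimenter", "")),
                "date": str(group.attrs.get("date", "")),
            })
    return experiments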

Straight from the source: NEURON’s incredible backwards compatibility

NEURON is a neural simulator, but it's not just another neural network application. NEURON has been around much longer than your fancy deep learning software. It was first released in 1994, although some of its early code and concepts seem to stem from the '70s and '80s. In fact, NEURON is not a machine learning library at all. It simulates real neurons. It uses real morphologies and ion channel densities to reproduce experiments. Some labs even use machine learning algorithms to fit the parameters of their NEURON models. NEURON is one of the workhorses of the computational branch of the Human Brain Project and the core engine in the Blue Brain Project.

But the age of NEURON is starting to show in its source code. Because some things are a bit awkward when working with this simulator, I decided to peek at its sources, just to see what this old engine looks like on the inside. It turned out to be an interesting journey. Deep inside NEURON's plotting library, I found this:

#if VT125
case VT:
vtplot(mode, x, y);

Now, I didn’t know what VT125 was when I first saw this, but a quick search on the web reminded me that I’m still a young software developer. I present to you, the VT100:

VT100 terminal
I couldn’t find a good picture of the VT125. I apologize if it looks way more modern than its older cousin.

Now that’s what I call backwards compatibility!

Some of you may want to lecture me about how modern terminals actually emulate VT100s and that this code might be in there to allow some fancy plotting inside xterm or its siblings. But I would argue that there are other parts of this code that give away its impressive attempt at supporting older platforms. Such as the rest of the above switch statement:

		switch (graphdev)
#if SUNCORE
		case SSUN:
			hoc_sunplot(&text, mode, x, y);
#if NRNOC_X11
#if NeXTstep
		case NX:
#if TEK
		case ADM:
		case SEL:
		case TEK4014:
			tplot(mode, x, y);
#if VT125
		case VT:
			vtplot(mode, x, y);

Now, we all love nested preprocessor if-statements inside switch blocks, but let's look past that and at what's supported here.

There’s the NRNOC_X11, which I believe introduces the only part of this block that might actually be executed nowadays. In addition we have SUNCORE, which might be Solaris, but I’d bet this supports something that existed long before Oracle acquired Sun back in 2010. There’s TEK, which may refer to something like the Tektronix 4010:

This baby was made in 1972, but it still runs NEURON’s plotting functions. Just download the 5 MB zip-file of NEURON’s source code and … oh, wait.

And then there's NeXTstep, which Apple acquired in 1997 and used to replace its own Mac OS with Mac OS X. Considering that NeXTstep saw its last official release in 1995, I think it's fair to say that this piece of code was written in the previous century. It could, of course, just be that no one has dared to search-replace NeXTstep with OS X, but I doubt it.

I should finish this with a final caveat: it could be that this code isn't in use by NEURON at all anymore. After all, I found all of the above in the plot.c file in the src/oc folder of NEURON's source code. The above could be remnants of NEURON's interpreter language, hoc, which was originally made as a demonstration of how to use the parser generator Yacc in The Unix Programming Environment book. As far as I know, NEURON is the only software out there that's still using hoc, but that's a story for another day.

Image Credits: Wikipedia User:ClickRick, Wikipedia User:Rees11.

What are neural networks? (Part 1)

Artificial neural networks are an imitation of the nerve cells in our brain. They are meant to make computers as good at learning as humans and animals are. This is done by mimicking how the brain is built up.

Neural networks have been researched ever since the 1940s, but it is only in the last few years that their use has really taken off. The reason is a combination of increased computing power, the need to analyze the enormous amounts of information on the internet, and the invention of more advanced networks. As recently as last year, Facebook announced that they had used neural networks to create a computer program that was better at recognizing faces in images than the average human.

The brain consists of many different cells. The most exciting ones are the nerve cells, which can conduct electrical signals across large distances. When signals are to be transferred between nerve cells, they shower each other with chemicals, which in turn produce electrical signals in the receiving cells.

In the synapses, nerve cells shower each other with chemicals to transfer signals.

It is the communication between nerve cells that we try to imitate when we create artificial neural networks. This can be done in many different ways, depending on how much information we are after. In the research I do with my colleagues, we try to include very many of the details in the communication between nerve cells, such as which ion channels are crucial for transferring the right signals, and what the shape of the nerve cells means for how they work. When Facebook, Google, and Microsoft try to create artificial intelligence, however, these things are not as important. There, the only goal is for the network as a whole to do the same job as the nerve cells in the brain, namely being able to learn, and to remember what it has learned.

This means that artificial neural networks can be very simple compared to the ones we find in our own brains. An advantage of simple networks is that the computer can do the calculations much faster than if we included all the details.

In the simplest models of neural networks, we regard each nerve cell as a point that can receive input from other nerve cells and send signals onwards. In other words, we simplify what is really the highly detailed structure in the image to the left into a very simple model without details:

We simplify the otherwise highly detailed nerve cells into points that can receive and send signals.

Each nerve cell can have several connections to other nerve cells, and together these connections form a network:

A network of simplified nerve cells.

The network above is only connected layer by layer: there are three layers in this network, and each layer is only connected to the next. In the brain, nerve cells are also organized in layers, but there are feedback connections between the layers as well. Still, by only allowing layer-wise connections, the network becomes much easier to train on a computer. Feedback connections are also used in artificial networks, though, especially for problems where they are more useful, such as teaching a network to understand things that change over time.
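
As a small illustration (my own sketch, not from the research mentioned above), such a layered network of point neurons can be expressed in a few lines of Python:

import numpy as np

# Each "nerve cell" is a point that sums its weighted inputs and
# fires (outputs 1) if the sum crosses a threshold.
def layer(inputs, weights, threshold=0.5):
    return (weights.dot(inputs) > threshold).astype(float)

np.random.seed(0)
x = np.random.random(4)         # input layer: 4 neurons
w1 = np.random.random((3, 4))   # connections from input to hidden layer (3 neurons)
w2 = np.random.random((2, 3))   # connections from hidden to output layer (2 neurons)

hidden = layer(x, w1)
output = layer(hidden, w2)
print(output)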

What do the individual nerve cells do?

This is the first in a series of posts I am going to write about neural networks. So far, I have only said a little about the idea behind the networks, and briefly about the path from detailed nerve cells to the points we use in our models. In the next posts, I will say a bit more about how we imagine these simple nerve cells to work, and how this can help us create an artificial intelligence and a computer that can learn by itself.

Nerve cells with a toon shader

Time for a new Blender project. After reading about the possibilities for creating cartoon-like materials, I decided to test this out on a 3D figure of a nerve cell:


At the moment, I have no idea what it can be used for, but it is always fun to test out different effects in Blender. Maybe it will show up on a future poster.

Download the Blender project here.

A new CINPLA logo as a weekend project

When I got the job as a PhD student, I joined a group named CINPLA. The group was formed to combine experiments and simulations in brain research.

This weekend, I decided to test out an idea for a CINPLA logo that we have been discussing at the office lately:


The idea behind the logo is that the left part represents the biological side of the work done in the group, while the right part represents simulations. The latter is the kind of work I am involved in myself. The logo will be used on posters and in talks given by people in the group, in addition to websites and the like.

The logo was made in Inkscape, which by the way is a wonderful tool for this kind of graphical work. It is free software and works excellently for vector graphics in Ubuntu.

The logo will soon be available in several formats on CINPLA's press page. It just has to go on trial there first, to be evaluated by the group and a couple of others.


When I was done, I naturally couldn't help myself, and in the same go I opened the 3D program Blender to see if I could do something fun with the logo there. Using SVG import, I could load the new image I had made, but I also wanted to see how it would look together with a proper model of a brain. I found one such model online, where A.M. Winkler has been kind enough to publish the result of an MRI scan of a brain in open formats.

I loaded this 3D model into Blender, put everything in its right place, and animated the camera, so that I ended up with the following video:

I have no idea what we will use such a video for, but maybe CINPLA will start making feature films one day. Then this video would surely make a fitting intro.

Since the 3D model is shared under a Creative Commons Attribution-ShareAlike 3.0 Unported License, so, naturally, is this video.

Choosing the right license for your code

I was pointed to John Hunter’s Why we should be using BSD and came to think about how I rather tend to advocate using the GPL license. Richard Stallman makes some very convincing arguments for why GPL is the better choice, but even though I like this reasoning, I do see why choosing a different license may be the right choice.

In this post, I will explain some of the differences between BSD and GPL and hopefully help you to choose for yourself. The target audience for this post are people in the scientific community, but it may also be useful to others.

So how do BSD and GPL differ? Below I have listed what I think is most important, but please remember that I am not a lawyer, and that there are other effects of licensing that may apply to you:

  • BSD is a free-for-all license that only requires attribution. Anyone using your code must give credit to you as the original author. Apart from that, they can do pretty much everything. They can change the code and sell programs based on it without asking you.
  • GPL too requires attribution. However, it also requires anyone distributing software based on your source code to make available any changes they make to it. They can still sell programs based on your code to others without asking, but if they were to do something really smart with your code, you can demand that they share the changes with you. And you too can sell software based on their changes.

Before moving on, I would like to clarify one important thing: Even though you are licensing your code as BSD or GPL, you can still sell software based on your code. You can even sell the rights to use your code with a different license. The licenses only apply to what other people can do with your code. You still keep all other rights.

In the below table, I have summarized the differences between the licenses by listing a few scenarios. “They” is referring to someone using your code in their own software projects:

| Scenario | BSD | GPL |
|----------|:---:|:---:|
| You can sell software based on your own code. | ✔ | ✔ |
| They can sell software based on your code. | ✔ | ✔ |
| They can make changes to your code. | ✔ | ✔ |
| They have to give you credit for writing the original code. | ✔ | ✔ |
| They have to use the same license as you. | | ✔ |
| You can demand access to the changes they made to your code. | | ✔ |
| They can sell software based on your code without sharing their changes with you. | ✔ | |
| You can sell software based on the changes they made to your code. | * | ✔ |

*Except if they also use a BSD license for their changes.

Some people call GPL “viral” because it “infects” any software using GPL licensed code. Any project using code with a GPL license will basically have to be GPL licensed too. This may be a showstopper for some companies that want to keep their own changes proprietary (secret/hidden) while still using your code. Note that this only applies if they distribute the software to others: If someone downloads your code and plays around with it on their own computer, they don’t have to care about the GPL license.

The viral effect is why Richard Stallman advocates the GPL license. He thinks it benefits the open source community (and perhaps also the scientific community) that software using GPL code must be GPL too. In his opinion, companies have money to spend on developing new products or on buying access to other people's source code. The open source community, on the other hand, does not necessarily have the same monetary resources, but it has access to all the open source code out there. If an open source project is fine with adhering to the GPL license, it can also use all other GPL licensed code.

Further, I think you should choose GPL for any code you think might become commercially viable for you in the future. Some people think the best option is to keep source code proprietary if you want to make money off it later. I think the exact opposite. If you are working on a project that you believe has some commercial value, why not let others find that commercial value for you? If someone starts selling software based on an improved version of your code, you can demand that they share the changes with you and start selling the software yourself.

However, there are many good arguments for using the BSD license too. Many open source projects are working with companies that require code not to be GPL licensed. One such project is Matplotlib, for which John Hunter is the original author. They have received contributions from companies like Enthought, which want to make sure they don’t have to license all their software as GPL because some code they use is from GPL projects. Such contributions are often significant for open source projects, and I can see why choosing BSD is a good middle way for them to keep the project open while still getting as much benefit from cooperation with companies.

Some also argue that BSD attracts more developers than GPL because some developers may be put off by a GPL license. However, I believe this works both ways. Some developers are also put off by non-GPL projects. I don’t think your licensing choice will change much in the number of developers that are attracted to your project. If anything, I would do some research and contact relevant companies and developers and ask if they would be willing to contribute to your project. You can always change the license as you go, but you should know that all code that has already been licensed as GPL or BSD will forever be licensed that way. It is non-retractable. Any future changes you make to your own code, however, can be put under a different license. You may even choose not to license your new code any more if you want to.

To conclude, when choosing between BSD and GPL, you should consider whether your main goal is to contribute to the open source community or to everyone in general. If you target only the open source community, GPL may be your best choice. If you want to contribute to everyone without further restrictions to the usage of your code, BSD may be your best bet.

Personally, I will continue using GPL as long as I think my code may turn into a commercially viable project or if I just don’t have a long-term plan yet. However, I will happily use BSD if I think the code can be contributed into a BSD licensed project such as Matplotlib.

There are also many other licenses out there. Have a look at the list made by the Open Source Initiative if you want to know more about other licenses.

Simulations of the brain

A couple of nights ago, I lay awake in bed and realized that this was one of those nights where I wasn't going to get any sleep. Fortunately, I always fall asleep eventually, but it can easily take a few hours from when I go to bed until I actually drift off. Other times, I fall asleep immediately, a bit like a light switch being turned off at exactly 22:04.

This time, I lay there thinking about what kept me from sleeping. Is it because of something I read right before going to bed? Am I stressing too much? Or is it because I reminded myself that I sometimes cannot sleep?

Why it is like this fascinates me. From my own experience, I know that there is great variation in how quickly we humans fall asleep. Some of my friends tell me that they experience the same as I do from time to time. Others are always dead tired when the clock passes 22, and fall asleep in an instant. Exactly what causes this, I have not found an answer to yet. I would also love to know what could get me to sleep when I am stressing about having to get up early the next day.

On the way to becoming a brain physicist

Pondering is probably much of the reason why I don't always fall asleep. It is probably also because I ponder that I ended up studying science: first nanotechnology and then physics. I have always wondered why the world is the way it is. And I love finding answers to that, bit by bit.

So I ended up as a physicist. In my day-to-day work, I create computer programs that simulate the world. That is, parts of it. They are programs that imitate some of the things that surround us. Such as ocean waves breaking against the shore. How the current finds its way through the wire and becomes light when you flip the switch. How atoms bind together into molecules. Or something as simple as how a ball is thrown into the air and falls to the ground a bit further away.

Now I have thrown myself into a field I know far less about, and that is neuroscience. About two months ago, I started in a PhD position where I will be studying the brain together with a team of other researchers. Some of us will create computer programs that imitate the nerve cells the brain is built up of, to better understand why it works the way it does.

Because the brain consists of billions of nerve cells, each with connections to thousands of others, our research team has to concentrate on just a small part at a time. Just like with the simulations I have done before. Still, some things can be answered just by studying a few nerve cells. For example, many have come a long way in understanding how we learn, why we remember, how we forget, and how we sleep. All just by looking at a fraction of the brain.

Blogging about brain research

Why we sleep, on the other hand, is still much debated, even though we know there are many good reasons to get enough sleep. Among other things, it is important for keeping us healthy. Russell Foster is a brain researcher with a fantastic TED talk in which he explains why we spend close to 32 years of our lives sleeping. If you, like me, keep cutting corners on the optimal amount of sleep, I recommend taking a look at this one:

Exactly why I sometimes cannot sleep, I will have to wait to find the answer to. In the meantime, I plan to learn quite a few exciting new things about the brain, and about what we can understand by imitating it on a computer.

In just a couple of short months, I have come across an incredible amount of cool stuff, and I want to show off more of the interesting and exciting things going on in the brain. I will be writing about some of what I learn here in the future. In addition, I plan to share more of the other things that fascinate and interest me, such as programming, graphics, politics, and physics.

PS: For those of you who (for some reason) miss my earlier blog posts about fixing obscure programming error messages, these have now ended up in their own category.

Installing Sumatra 0.6-dev in Ubuntu 13.10

Sumatra is a great tool for reproducibility and provenance when you are running numerical simulations. Keeping your work reproducible ensures that you and others will be able to check your results at a later time, while provenance keeps track of where your results came from in the first place.

I won’t get into details about using Sumatra in this post, as its documentation is quite good at describing that already.

Computational Physics members

If you are a member of the Computational Physics group and have access to the computing cluster, Smaug, you don’t have to install anything. Just ssh into our dedicated Sumatra machine, named bender, and run your jobs from there. In the future, all machines will support Sumatra, but for now, we only have one dedicated machine for this task:

ssh bender

On your own machine

Otherwise, or if you want it on your own machine, you will have to install it manually. This is done by cloning the repository and running the setup.py file, which will install Sumatra with all dependencies:

hg clone
cd sumatra
python setup.py install --user

Adding --user to the final command ensures that all packages are installed to your home directory, under .local/lib, so that you don't have to install with sudo privileges. It also makes it a bit easier to remove the package if you decide not to keep it.

We also need the GitPython and PyYAML modules, which you may install using apt-get:

sudo apt-get install python-git python-yaml

And that’s it! You should now be able to run your projects with Sumatra.