## 19 January 2013

### Installing Agda on Windows

I was able to get Agda installed and working on Windows. The versions involved:

• Agda (from darcs; roughly a 2.3.3 prerelease)
• ghc-7.6.1 32-bit
• GNU Emacs 24.2
• Windows 7 64-bit
• DejaVu font

1. First, I installed ghc from here.
2. The next step is the hardest one. You need a copy of darcs. I already had a copy of darcs from a previous ghc install. If you can't find a Windows binary of darcs that works for you, try these patches: http://lists.osuosl.org/pipermail/darcs-users/2012-December/026733.html
3. Fetching Agda and the standard library is quite slow so I recommend starting that now. The commands are:
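The exact commands aren't preserved here, but they would have looked roughly like this (the repository URLs are an assumption from that era and may be out of date):

```
# URLs are assumptions, not the post's original text
darcs get --lazy http://code.haskell.org/Agda/
darcs get --lazy http://code.haskell.org/Agda/std-lib/
```

The --lazy flag skips downloading old patches, which helps a lot on a slow connection.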
4. Next you'll want to get a recent copy of emacs. I installed emacs 24.2 because it has a nice built-in package manager to make it easier to install extensions. If you use emacs 24.2 you'll need to patch your Agda installation. You can get emacs from here: http://ftp.gnu.org/gnu/emacs/windows/

Just expand that archive somewhere and add runemacs to your start menu. We'll continue with the configuration of emacs in a later step.
5. Finding a good monospace font with unicode glyphs is not easy, and I recommend DejaVu. It's probably missing some glyphs, but I haven't run into any yet. You can get it here: http://dejavu-fonts.org/wiki/Download

I expanded the tarball using 7-zip and then copied the font files into the font folder in Windows. You can get there by going to Start -> Control Panel -> Fonts.
6. Once the download of Agda finishes, go into that directory for the build. I created a patch on this ticket that fixes unicode support. If that hasn't been applied yet, you'll need to download the patch and apply it yourself. You can try applying it in either case; darcs will simply ignore it if it's already there.
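Applying a darcs patch bundle looks roughly like this (the filename here is hypothetical):

```
darcs apply unicode-support.dpatch
```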

To start the build I recommend using cabal-dev, and the command would be:
• cabal-dev install --prefix=$HOME/AppData/Roaming/cabal/

If you use plain cabal, it would simply be this command:

• cabal install

7. Once the build finishes, it's time to configure emacs. Use the agda-mode command to start the configuration:

• agda-mode setup

Fire up emacs and open ~/.emacs. Mine looks something like this:

You can see that I've configured my default font to be DejaVu. You can set this via the menus, or just copy what I have. I've also got a bit in there that makes sure everything is set up to use utf8. I think that is optional. You'll want to make sure you edit line 8 so that agda can find the standard library code that you downloaded.

8. Reload your ~/.emacs config (I find it easiest to just restart emacs). Put this sample code into Foo.agda and try to load it with C-c C-l. If everything is working, it should produce an error message in the AgdaInfo buffer with correct-looking unicode symbols:

If you get stuck, try looking around the Agda wiki for pointers. I've found that most of the documentation is hanging off this page: http://wiki.portal.chalmers.se/agda/agda.php?n=Main.Documentation

## 02 December 2012

### Haskell Community Project Ideas

The recent blog posts by FP Complete employees and cabal shortcomings have spawned a fair bit of discussion about community direction and infrastructure projects. On top of this, many people are excited about 24 pull requests and 24 days of Hackage. I thought I would throw some project ideas into the mix:

#### TUI environment for ghci

We recently gained the ability to colorize and pretty-print results in ghci, but we can do so much more. I don't think we need a full graphical user interface on top of ghci; I think we could accomplish a lot using a rich text user interface (TUI) instead. The idea I have is to use a library such as vty-ui to build a multipane interactive system on top of ghci.
Imagine having a special pane for the ghci debugger, or a pane for looking at the memory requirements and structure of types/values (à la vacuum). It could also support colorizing input as you type expressions (and parenthesis matching). Another pane could allow for module browsing and viewing haddocks.

#### Haddock 2.0

The basic idea here is to replace the haddock internals with pandoc. We could take this a step further and allow people to specify which type of markup they want to use for haddocks. The particular markup style would then be based on a directive such as:

-- | haddock-markup: markdown

I'm not sure what the default should be; perhaps a "haddock to pandoc" converter could be provided as the default.

#### Community Edition of the Haskell Editor

There has been a lot of talk about Haskell IDEs lately, but for many of us, editors such as Emacs and Vim are working well. Maybe it's time for a community-maintained installer of plugins for the major editors? This package would bundle popular plugins (including ghc-mod) and provide reasonable defaults, as well as documentation about using the plugins. The idea is to make it easier for Haskell experts and Haskell beginners alike to get up and running with a nice editor environment for Haskell. I envision such a package being complementary to the Haskell Platform (or perhaps even included in it).

#### Social Hackage

Hackage 2 is getting closer to deployment by the day. It's time to start thinking about how to use the new features for a "Social Hackage". By which I mean a hackage that distributes responsibility and provides clues as to the quality, maturity, and community opinion of packages and maintainers. Features such as upvoting, tagging, and comments can be used together to help people quickly and easily assess whether a package is right for their project needs. I doubt these features will be properly tuned out of the box. Now is the time to get involved with Hackage 2 to help with this effort.
Also, now is a good time to help Duncan et al. get the new hackage tested and running.

#### More Platform Support

Heard that Simon Marlow is leaving MSR? Want to get involved with GHC development? How about being the champion for a new compiler backend? Here are some platforms that I wish had better support:

• .NET
• JVM
• Web browser (JavaScript or Google Native Client). Perhaps Fay will fill this role in the future.
• ARM; it's been ported, but we need more testers
• iOS

#### Good GUI libraries

My highest priority here is that the libraries be easy and painless to install on the big three platforms (Windows, OSX, Linux). Improving GTK2HS and wxHaskell is a start, but I suspect this effort will require a major attempt from scratch.

#### Improving Windows Support

Improving the infrastructure such that developing for Windows is easier. There is a lot to do. Here is a starter list:

• Port more libraries. Anything that uses the FFI is likely not to build on Windows.
• Complete husyhox so that Windows devs have a posix shell environment alternative to cygwin
• Port GHC and the RTS to work with the "native Windows toolchain" (aka Visual Studio)

#### Hackage/github/travis-ci integration

Imagine being able to create a tag ("hackage v0.1.1.3") in your github repository and have the resulting push fire off a travis-ci build. If the build passes, the tag is uploaded to hackage using the version number from the tag. This would make it so that your good releases automatically go up on hackage.

## 30 March 2012

### tmux + ssh-agent

Lately I've been using tmux on remote servers. This allows me to maintain task-specific sessions on remote servers, regardless of what computer I work from or whether my internet connection gets severed. The biggest wrinkle in my setup has been ssh-agent forwarding. When I reconnect, the remote side no longer knows where to find SSH_AUTH_SOCK. I found a solution.

Step 1. On the remote host, create $HOME/.ssh/rc, make it executable, and add the following contents:
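A minimal sketch of the usual technique (the symlink path here is my assumption, not necessarily the original script):

```
# Sketch of $HOME/.ssh/rc (assumed, not the author's exact script).
# On each new connection, point a stable symlink at the freshly
# forwarded agent socket so long-lived tmux sessions can find it.
if test -S "$SSH_AUTH_SOCK"; then
    ln -sf "$SSH_AUTH_SOCK" "$HOME/.ssh/ssh_auth_sock"
fi
```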

Don't forget: chmod 755 $HOME/.ssh/rc

Step 2. Add these two lines to your .tmux.conf:

A few things kept me from getting this working straight away. You have to disable the default behavior of updating SSH_AUTH_SOCK (that's what the first line does), then you have to set SSH_AUTH_SOCK to point at the symlink created by the script in step 1.

Edit: I mistakenly thought previously that you have to use an absolute path to the symlink because tmux won't expand "$HOME" or "~". Turns out, if you don't quote the path then tmux can do the right thing with ~.
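Based on that description, the two lines would look something like this (assuming your ~/.ssh/rc symlinks the socket to ~/.ssh/ssh_auth_sock):

```
# Drop SSH_AUTH_SOCK from the variables tmux refreshes on attach
set -g update-environment "DISPLAY SSH_ASKPASS SSH_AGENT_PID SSH_CONNECTION WINDOWID XAUTHORITY"
# Point every session at the stable symlink created by ~/.ssh/rc
setenv -g SSH_AUTH_SOCK ~/.ssh/ssh_auth_sock
```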

## 23 March 2012

### Life at Galois

At Galois, we believe in seeking out and solving important problems to make the world a better place. This philosophy runs through everything we do: from internal interactions on an individual level, to the interesting technical challenges we take on, to the way we orient toward the external world.

We hire the best people, we help them to better understand their own passions, and then together we take on important problems.  It just so happens that we have skilled software engineers with a passion for functional programming who take on challenges in crypto, critical systems, secure networking, and so on.

For me, the most compelling reasons to work at Galois are: a strong cultural emphasis on freedom to choose your roles within the organization, including seeking external funding to work on whatever you want; a strong commitment to open source; and a cultural emphasis on functional programming and formal methods.

You may have seen the recent article by an ex-Google employee explaining why he left Google. One of the themes in the post is that the cultural emphasis, what Google believes, changed. What an organization believes has a deep and profound impact on how you spend your time at the organization, who the organization attracts, and what impact the organization will have on the world.

We have no shortage of interesting problems to tackle in our quest to make the world a better place. If you'd like to join our cause, we're always accepting applications. At the moment, we're especially interested in hearing from potential project leads and principal investigators, but even if you don't fit those roles we may still have room for you. We are also interested in applications for internships.

## 05 February 2012

### Proposal: Community wide build infrastructure

Status Quo
Status Quo
As Haskellers we live in a world where our tools just keep getting better. See, for example, the GHC release notes, which contain gems like this: Eq and Show are no longer required for Num. GHC releases break backwards compatibility so that they can bring us really valuable updates and improvements.

It's not just GHC that is a moving target. Everything in the Haskell community is improving and moving. Hackage gets tons of new and updated packages each week. Updates to libraries on Hackage tend to follow the precedent set by GHC: breaking API changes are common, and always well intentioned.

The package versioning policy (PVP) helps by allowing us to express our package dependencies in terms of what we expect to be a breaking change and what is meant to be a non-breaking change. Even so, accidental API breakages creep in and some authors fail to follow the PVP due to ignorance or choice.

Problem
The downside to all the wonderful flux is that it's hard to keep your code building in the varied environments that it ends up in. Fixing a deprecation warning on one version of GHC puts you at risk of not building on a previous, but important, release of GHC.  For example, in 7.4.1, mkTyCon is now deprecated in favor of the recently added mkTyCon3.
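A common way to keep one source tree building on both compilers (my sketch; the package and module names are hypothetical) is a CPP guard:

```haskell
{-# LANGUAGE CPP #-}
module Data.Foo (fooTyCon) where

import Data.Typeable

-- Use mkTyCon3 on GHC >= 7.4 and fall back to the
-- deprecated mkTyCon on older compilers.
fooTyCon :: TyCon
#if __GLASGOW_HASKELL__ >= 704
fooTyCon = mkTyCon3 "my-package" "Data.Foo" "Foo"
#else
fooTyCon = mkTyCon "Data.Foo.Foo"
#endif
```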

Build with -Werror -Wall and you're even more likely to run into problems. Actually, I highly recommend disabling -Werror before posting to hackage. Feel free to use it for dev builds.
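In a .cabal file, that advice amounts to something like this (an illustrative stanza, not from the post):

```
library
  -- ship with warnings on, but never with -Werror
  ghc-options: -Wall

-- for local development, add -Werror on the command line instead:
--   cabal install --ghc-options=-Werror
```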

How can each of us be sure that when we release to hackage that our source builds on the most machines possible?  How can we as a community make sure that our software is buildable with minimum hassle?

Proposal
I think the only real solution to this problem is testing. I need to try my software in the different environments that I want to support. At a high level, that's Windows, OSX, and Linux. At a more detailed level, that includes testing on multiple versions of ghc, trying out different versions of the dependencies from hackage, and comparing against my testsuite. That's exactly how we do it at work, and it works well.

I think it's time for a community visible build service. Apache offers this for projects under their care. Details about their service are found here.

My current thinking is to combine Hudson or Jenkins, possibly with some form of sandboxing such as sydbox. Do sandboxing tools exist for Windows and OSX? Another approach to sandboxing is to clone a virtual machine and discard it after the build finishes or a timeout occurs.

A completely different approach is to trust the community to not abuse the servers and let anyone apply for an account, with new user vetting. I think this is my preferred stance. If we make it so build machines are disposable and easy to redeploy from scratch, resource usage is monitored or capped per-request, and community members are moderating use, then I believe it should be possible to use the honor system.

Each build machine would have multiple versions of GHC installed. Additional extra-libraries could be petitioned for so that lots of C bindings would be buildable.

Users need to be able to submit projects, and their dependencies, for build testing before pushing the result to hackage.

What else does it need to do?

Call for Volunteers
I could build a private set of build machines for my own use and maintain them, but I don't think I'm alone in wanting this. Who is with me?

## 18 January 2012

### Keeping the "science" in Software Development

If you already feel that math is a science you can safely skip the first section.  What follows are my musings on what it means to be "correct", especially in software.  I pose a challenge at the end for practitioners and researchers.

Math is science too
According to Wikipedia the scientific method is:
Scientific method refers to a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on gathering empirical and measurable evidence subject to specific principles of reasoning.
At first glance it may appear that mathematics doesn't follow the scientific method above. The mathematics you're most likely familiar with is a collection of definitions and the facts (lemmas and theorems) about those definitions that result from applying logical deduction. You might notice that if you reason that way, then textbook applications of Newton's three laws do not follow the scientific method either.

You might argue that in this case the application of Newton's laws is now mathematics and no longer science. I would argue that it is both mathematics and science. Let's see if I can convince you.

By the time you see the definitions and corresponding theorems in a math text, they have been reduced to facts. The trial and error, as well as the correction and integration of previous knowledge, have already happened. The same is true of a textbook that shows you how to apply Newton's three laws. I recommend taking a moment to familiarize yourself with Lockhart's lament.

Math only seems to not be a science because of our relationship to learning it. Most of us are taught math as facts. If you've taken a proof-based course, you may have had the experience of searching for proofs. There is a lot of trial and error involved. Mathematics as a field may have already integrated previous knowledge, but if you are taking a course, chances are you have not. So you employ trial and error, as well as hypothesis refinement, to build logical deductions that establish what your intuition, or pen-and-paper examples, says is true.

The set of assumptions that you choose can make or break a result.  Even though mathematical results are timeless, they still bend to the sway of assumptions.  This is exactly what happens to empirical scientists too.  They think they know something, or have the right model, but later it turns out they reasoned from a flawed assumption and suddenly new data transform their field.

Evidence vs Proof
If you've studied proofs in different logical settings you may have noticed that what constitutes a proof depends on what you seek to gain from the proof.  See for example classical logic versus constructive logic.

In a court case or a forensics study, people look for evidence or witnesses to give data points in support of a particular position within a larger case (or claim).

Think about that for a minute. A claim is made, then people try to establish or refute it. Oftentimes, the assertion need only be accepted or rejected by a "reasonable" skeptic who is willing to accept partial proof. Such a skeptic might be a jury, or peers reviewing a publication for acceptance in a journal. Because we used partial proof, new information could turn up later that reverses our understanding.

At the other extreme we have mathematical proofs, which are composed of logical statements following from assumptions.  Proofs are timeless.  If we get our logic right and we're clear about our assumptions then the result will stand forever.  I say "forever", but in reality, proofs can be found to be incomplete, insufficiently rigorous, or using the wrong logic at a later date.  Sometimes years after the arguments are accepted.

It seems that the difference here isn't in whether the resulting "facts" are immutable but rather the strength with which we intend to make the claim.  In the forensics case, we are making a claim to the best of our incomplete knowledge about what happened.  In the case of mathematical proof, we are making a claim that should be true forever.

In my opinion, the difference between "evidence" and "proof" exists to quantify the strength of our assertion that the claim or conjecture is true. Claims can roughly be placed on a continuum from weak to strong. Some claims, such as "a human authored this blog post", are weak in nature and need little evidence. Other claims, such as "the author of this blog post can see through walls", are hard to believe, as there is good evidence that humans can't see through walls. To prove the second claim, I would hope that you demand really strong evidence: perhaps in the form of a demonstration, a scientific model for how I can do it, and validation from peer review.

Due to the continuum of evidence strength described above, I consider proofs to be a form of evidence. Proofs, in the mathematical sense, are the strongest form of evidence; claims made by politicians would be at the extreme other end of my continuum.

The strength of the evidence for a claim has practical applications too. I wouldn't engineer a shuttle to the moon based on arguments built by politicians; I would base my engineering on mathematical proofs, as getting to the moon requires a high degree of correctness.

So then, what level of evidence is right if I need to be correct?

Evidence Based Correctness
Now we're really at the heart of the matter.  Sometimes it just doesn't matter if we're correct or not, for example when arguing with a spouse.  Other times it matters deeply, like when we want to engineer a shuttle to the moon.

While mathematics is obsessed with being unequivocally true, other branches of science have been trying to deepen their understanding while accepting a greater risk of being wrong.  This results in a different level of evidence required for progress.

In other words, you have to figure out how important it is to you (or your application) that you know you're correct versus it being reasonable that you are correct.  I like continuums and this is no exception.  I believe that you can build a continuum of how strongly you believe things.  This continuum is a natural dual to the continuum for strength of evidence.

In other words, the bolder your claim is, the stronger the evidence you need. Unfortunately, no one has a magical formula for knowing when evidence is strong enough. If we did, science would become more precise, court cases would become more cut-and-dried, and mathematicians could relax and quantify results in terms of proof strength.

Now wait a second.  Quantify results in terms of proof strength?  That sounds like statistical reasoning doesn't it?  Statisticians are able to cobble together data points and talk about the strength of that evidence in terms of supporting a particular hypothesis.

In software development, testing is probably the cheapest way to have some evidence that your program is correct.  Testing is almost never enough to have a proof of correctness though.  On the other end of the spectrum we have theorem proving, which is known to be costly in terms of human effort.

My point is this: you don't have to force yourself into a staunch mathematical camp to have "correctness" in your software development. Nor do you have to give up on correctness in order to fit into a budget. It's not a dichotomy where you are forced to give up on either truth or practicality. Testing is not "wrong" and proofs are not "required".

What is required is a level of evidence strong enough to support the level of belief we need for our chosen level of correctness. Currently, we have to carefully evaluate where on the continuum of correctness our result rests and then pick a commensurate level of evidence. You probably already do this intuitively to some degree. My challenge to you is to do it explicitly and consciously.

Furthermore, can we take the precedent set by the statisticians and find a way to quantify strength of evidence?  Can testing and theorem proving be unified on a continuum by this measure?  Is there a way to look at a successful run of a test suite, that may or may not include theorem proving, and quantify the strength of that evidence that our program is correct?

## 23 September 2011

### Galois offers Haskell course: The Tao of Functional Programming

This year Galois will be offering a course in professional Haskell development. Details on our main website: http://corp.galois.com/haskell-course