Posts

Showing posts from July, 2010

Mathematica 7 review: buggy but fun!

At only £195+VAT, the Mathematica 7 Home Edition is just too tempting as an executive toy, but it still seems to be far too buggy to be taken seriously. After just a few hours of playing around, a variety of bugs became apparent. Every Mathematica user fears the dreaded error box that marks the loss of all unsaved data. Fortunately, a really serious bug in the FFT routines of Mathematica 7.0.0 was fixed for the 7.0.1 release. This was a showstopper for customers of our time-frequency analysis add-on. The severity and ubiquity of this bug really highlight just how little quality assurance goes into Wolfram's software which, in turn, goes to show how unimportant correctness is in the creation of commercially-successful software products, even when they are used in aerospace engineering! The first bug is in the new support for parallelism in Mathematica. Although it is only supposed to handle 4 cores, it produces pages of errors when run on a machine with more cores such as...

F#unctional Londoners meetup lecture (28th July 2010)

Zach Bray of Trayport and Jon Harrop of Flying Frog Consultancy Ltd. will be presenting lectures at Skills Matter eXchange London (UK) at 6:30pm on Wednesday 28th July 2010. Many thanks to Carolyn Miller and Phil Trelford for organizing the F#unctional Londoners Meetup Group, an excellent series of events!

Haskell's hash tables revisited: part 2

Our previous blog post contained several benchmark results comparing the new GHC 6.12.3 with F#. We have since discovered some other points of interest regarding this benchmark. Firstly, the Haskell results rely on the use of a garbage collector that prevents parallelism. If the more modern multicore-friendly GC is used (by compiling with -threaded and running with +RTS -N8) then the time taken increases from 4.5s to 10.6s. This is over 2× slower than before and now over 13× slower than F#. Naturally, the F# code was already using the multicore-capable .NET garbage collector, so this was an unfair bias in favor of Haskell. Secondly, the Haskell code exploits an algorithmic optimization that assumes the keys are unique. This is often not the case in practice and, again, the F# code did not exploit such an assumption, so this was another unfair bias in favor of Haskell. A fairer comparison may be obtained by changing from the insert function to the update function in the Hask...
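The benchmark source is truncated above, so the following is only a minimal sketch of the insert-vs-update distinction using the old Data.HashTable module that shipped with GHC 6.12 (the module and function names are that era's real API; the keys and values here are made up):

```haskell
-- Sketch: insert vs update in the old Data.HashTable (GHC 6.12 era).
import qualified Data.HashTable as H

main :: IO ()
main = do
  ht <- H.new (==) H.hashInt
  -- insert adds a binding unconditionally, so duplicate keys accumulate:
  H.insert ht 1 "first"
  H.insert ht 1 "second"
  -- update first looks the key up and replaces any existing binding,
  -- much like assignment into a .NET Dictionary; it returns whether an
  -- old binding was replaced:
  replaced <- H.update ht (2 :: Int) "third"
  print replaced
```

Using update throughout removes the unique-keys assumption at the cost of an extra lookup per operation, which is why it makes for a fairer comparison against the F# code.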

Haskell's hash tables revisited

Update: We have since discovered that these results were biased towards Haskell. Mikhail Glushenkov recently announced the Haskell Platform 2010.2 RC for Windows. In particular, this is the first release to include a version of the Glasgow Haskell Compiler (6.12.3) that has the new garbage collector fix addressing the performance problems Haskell programmers have been experiencing with mutable arrays of boxed values over the past 5 years, such as the spines of hash tables. Naturally, we couldn't resist benchmarking the new release to see if it lives up to the promise of decent hash table performance. Even though this installer for the Haskell Platform is just a release candidate, we found that it installed smoothly and ran correctly the first time. First up, a repeat of our previous benchmark, which inserted 10,000,000 bindings with int keys mapping to int values into an initially-empty hash table:

GHC 6.12.1: 19.2s
GHC 6.12.3: 4.48s
F# .NET 4: 0.8s

The new version of GHC is clear...
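The benchmark source is not shown here, so this is only a hedged reconstruction of that kind of measurement using the Data.HashTable module from GHC 6.12; the exact loop structure and spot-check are assumptions:

```haskell
-- Sketch: insert 10,000,000 Int -> Int bindings into an initially
-- empty Data.HashTable (old GHC 6.12 API); details are assumptions.
import qualified Data.HashTable as H

main :: IO ()
main = do
  ht <- H.new (==) H.hashInt
  mapM_ (\i -> H.insert ht i i) [1 .. 10000000 :: Int]
  -- Spot-check a single binding rather than printing the whole table:
  v <- H.lookup ht 9999999
  print v
```

Compiling with ghc -O2 --make and running with +RTS -sstderr breaks the elapsed time down into mutator versus garbage-collector time, which is where the 6.12.3 GC fix for boxed mutable arrays shows up.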

Mono 2.4 still leaking like a sieve

How fast are hash tables in Mono 2.4 compared to .NET 4? An excellent question, but one which led us to another trivial program that leaks memory on Mono but not on .NET (we previously gave a simple stack implementation written in F# that leaked memory on Mono 2.2). We tried to use the following benchmark program to measure Mono's performance when filling a hash table from empty with float keys and values:

for i in 1..10 do
  let t = System.Diagnostics.Stopwatch.StartNew()
  let m = System.Collections...

Animated Mathematica functions

Here's a fun web page from Wolfram Research that has animations for a bunch of Mathematica's built-in functions.

Book review: Mathematica Cookbook by Sal Mangano

O'Reilly just published a new book, the Mathematica Cookbook, about Wolfram Research's flagship product. This book contains many interesting examples from various disciplines. Most of these are derived from freely available examples written by other people (primarily from Wolfram Research's original Mathematica Book and also the excellent Wolfram Demonstrations Project), but the author has simplified some of the programs to make them more accessible. However, the density of the information in this book is incredibly low. Most pages are filled with superfluous Mathematica output that is often not even described in the text:

Dozens of triples on page 15.
Page 58 lists hundreds of numbers but the text does not even describe their significance.
Page 205 lists all of the words with a subset of the letters "thanksgiv".
Page 226 is raw XML data.
Pages 264-265 are solid code that renders a snowman with circles and some dots for snow (all in black and white).
Pa...

Purely functional games

A recent blog post entitled "Follow up to functional programming doesn't work" caused a bit of a stir, encouraging Haskell programmers to dredge up the nearest things Haskell has to real computer games: a Super Mario clone, Frag (a reimplementation of part of Quake), a Gradius clone, 4Blocks, and a game that blows all of the others away called Bloxors. This beautiful little OpenGL-based puzzle game weighs in at just 613 lines of purely functional code (many of the other games use unsafe* functions to introduce uncontrolled side effects). It is also one of the few programs on which cabal install actually works. Check it out here!

Debunking the 100× GPU vs. CPU myth

Intel recently published a paper with no fewer than 12 authors, Debunking the 100X GPU vs. CPU myth: an evaluation of throughput computing on CPU and GPU, in which they criticize the huge performance discrepancies cited by researchers publishing on General-Purpose GPU (GPGPU) programming in the context of what they call "throughput computing". We have also noticed bad science in this domain before. We tried to reproduce the incredible results of one paper, with a view to entering this market ourselves, only to discover that its authors had used the reference implementation of LAPACK instead of the vendor-tuned implementation for their CPU, which was 10× faster. Like Intel, we found that the performance advantage of a GPU was relatively modest (2.5×) given the enormous costs and liabilities of using a GPU for number crunching. However, whereas we are fortunate enough to be able to simply dismiss fantastical results as irrelevant propaganda, Intel are presumably feeling the pinch as misinformed cus...