Python at Inflection Point in HPC

Authors: Kurt Smith, Robert Grant, and Lauren Johnson

We attended SuperComputing 2013, held November 17-22 in Denver, and saw huge interest in Python. There were several Python-related events, including the “Python in HPC” tutorial (Monday), the Python BoF (Tuesday), and a “Python for HPC” workshop held in parallel with the tutorial on Monday. But we had some of our best conversations on the trade show floor.

Python Buzz on the Floor

The Enthought booth had a prominent “Python for HPC: High Productivity Computing” headline, and we looped videos of our parallelized 2D Julia set rendering GUI (video below).  The parallelization used Cython’s OpenMP functionality, came in at around 200 lines of code, and generated lots of discussions.  We also used a laptop to display an animated 3D Julia set rendered in Mayavi and to demo Canopy.
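
The demo code itself isn’t reproduced here, but for readers who haven’t seen Cython’s OpenMP support, here is a rough sketch of the kind of prange-based escape-time kernel such a renderer uses (file name, signatures, and defaults are illustrative, not the demo’s actual source):

```cython
# cython: boundscheck=False, wraparound=False
# julia.pyx -- illustrative sketch only; build with OpenMP enabled,
# e.g. extra_compile_args=["-fopenmp"], extra_link_args=["-fopenmp"] in setup.py.
import numpy as np
from cython.parallel import prange

def julia_escape_times(double cr, double ci, int n, double bound=2.0, int maxiter=200):
    """Escape-time counts for the Julia set of z -> z**2 + c on an n x n grid."""
    cdef int[:, ::1] counts = np.zeros((n, n), dtype=np.int32)
    cdef int i, j, k
    cdef double zr, zi, tmp
    cdef double step = 2.0 * bound / (n - 1)
    # The outer loop is distributed across OpenMP threads; the body is pure C,
    # so the GIL can be released for the whole computation.
    for i in prange(n, nogil=True, schedule='static'):
        for j in range(n):
            zr = -bound + i * step
            zi = -bound + j * step
            k = 0
            while zr * zr + zi * zi < bound * bound and k < maxiter:
                tmp = zr * zr - zi * zi + cr
                zi = 2.0 * zr * zi + ci
                zr = tmp
                k = k + 1
            counts[i, j] = k
    return np.asarray(counts)
```

Compiled with the OpenMP flags, a loop like this scales across all cores with no changes to the calling Python code.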

Many people came up to us after seeing our banner and video and asked “I use Python a little bit, but never in HPC – what can you tell me?”  We spoke with hundreds of people and had lots of good conversations.

It really seems like Python has reached an inflection point in HPC.

Python in HPC Tutorial, Monday

Kurt Smith presented a 1/4 day section on Cython, which was a shortened version of what he presented at SciPy 2013.  In addition, Andy Terrel presented “Introduction to Python”; Aron Ahmadia presented “Scaling Python with MPI”; and Travis Oliphant presented “Python and Big Data”. You can find all the material on the PyHPC.org website.

The tutorial was generally well attended: about 100–130 people.  A strong majority of attendees were already programming in Python, with about half using Python in a performance-critical area and perhaps 10% running Python on supercomputers or clusters directly.

In the Cython section of the tutorial, Kurt went into more detail on how to use OpenMP with Cython, which was of interest to many based on questions during the presentation. For the exercises, students were given temporary accounts on  Stampede (TACC’s latest state-of-the-art supercomputer) to help ensure everyone was able to get their exercise environment working.

Andy’s section of the day went well, covering the basics of using Python.  Aron’s section was good for establishing that Python + mpi4py can scale to ~65,000 nodes on massive supercomputers, and also for addressing people’s concerns regarding the import challenge (the filesystem bottleneck that occurs when thousands of Python processes import modules at startup).
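
For readers new to MPI from Python, a minimal, self-contained example of the mpi4py style covered in that section (not Aron’s actual tutorial material) looks like this:

```python
# Run under MPI, e.g.: mpiexec -n 4 python pi_mpi.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Midpoint-rule integration of 4/(1 + x**2) on [0, 1], whose exact value is pi.
# Each rank handles every size-th sample point; the partial sums are then reduced.
n = 10000000
i = np.arange(rank, n, size)
x = (i + 0.5) / n
partial = np.array([np.sum(4.0 / (1.0 + x * x)) / n])

total = np.zeros(1)
comm.Reduce(partial, total, op=MPI.SUM, root=0)  # fast, buffer-based reduction

if rank == 0:
    print("pi ~= %.10f (error %.2e)" % (total[0], abs(total[0] - np.pi)))
```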

Python for HPC Workshop, Monday

A day-long “Python for HPC” workshop of presentations ran in parallel with the “Python in HPC” tutorial. Of particular interest were the talks on “Doubling the performance of NumPy” and “Bohrium: Unmodified NumPy code on CPU, GPU, and Cluster”.

Python for High Performance and Scientific Computing BoF, Tuesday

Andy Terrel, William Scullin, and Andreas Schreiber organized a Birds-of-a-Feather session on Python, which had about 150 attendees (many thanks to all three for organizing a great session!).  Kurt gave a lightning talk on Enthought’s SBIR work.  The other talks focused on applications of Python in HPC settings, including a set of IPython notebooks covering the basics of the Navier-Stokes equations.

It was great to see so much interest in Python for HPC!

Enthought awarded $1M DOE SBIR grant to develop open-source Python HPC framework

We are excited to announce that Enthought is undertaking a multi-year project to bring the strengths of NumPy to high-performance distributed computing.  The goal is to provide a more intuitive and user-friendly interface to both distributed array computing and high-performance parallel libraries.  We will release the project as open source, providing another tool for data processing, modeling, and simulation in the realm of big data.  The project is funded under a Phase II grant from the DOE SBIR program [0] [1], and is headed by Kurt Smith.

The project will develop three packages designed to work in concert to provide a high-performance computing framework.  To maximize interoperability and extensibility, the project will design a distributed array protocol akin to the Python PEP-3118 buffer protocol [2], making it possible for other libraries and projects to easily interoperate with ODIN and PyTrilinos distributed data structures. The protocol will allow interoperability with the Global Arrays and the Global Arrays in NumPy (GAIN) projects based out of Pacific Northwest National Laboratory (PNNL). Computational scientist Jeff Daily, who leads GAIN development at PNNL, will help in this effort.
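
The protocol is still being designed, so there is no code to show yet; purely as a hypothetical illustration of what a PEP-3118-style exchange might look like, a distributed-array container could expose its local block plus decomposition metadata through a single special method (every name below is made up for illustration):

```python
# Hypothetical sketch only: the real protocol is under design and none of these
# names are final.  The idea mirrors PEP 3118: a consumer asks a distributed array
# for its locally owned block plus a description of how it fits into the global array.
import numpy as np


class ExampleDistributedArray(object):
    """Toy container owning one contiguous block of rows of a global 2-D array."""

    def __init__(self, global_shape, row_start, row_stop):
        self.global_shape = global_shape
        self.row_start, self.row_stop = row_start, row_stop
        self.local_block = np.zeros((row_stop - row_start, global_shape[1]))

    def __distarray__(self):  # hypothetical protocol method
        return {
            "version": (0, 1),
            "buffer": self.local_block,      # local data, itself a PEP-3118 buffer
            "dim_data": (
                {"dist_type": "b",           # 'b': block-distributed dimension
                 "size": self.global_shape[0],
                 "start": self.row_start,
                 "stop": self.row_stop},
                {"dist_type": "n",           # 'n': not distributed
                 "size": self.global_shape[1]},
            ),
        }


# A consumer (another distributed-array library, say) reads the metadata and the
# local buffer without copying, much as NumPy consumes PEP-3118 buffers.
a = ExampleDistributedArray((1000, 4), row_start=0, row_stop=250)
meta = a.__distarray__()
local = np.asarray(meta["buffer"])
print(meta["dim_data"][0]["start"], meta["dim_data"][0]["stop"], local.shape)
```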

The three components are described in more detail below.

Optimized Distributed NumPy (ODIN)

ODIN provides a NumPy-like interface for distributed array computations.  It offers:

  • distributed parallel computing on array expressions;

  • specification of an array’s domain decomposition, whether for processing or for storage across files, with sensible defaults;

  • specification of the processes involved in specific array computations;

  • features for specifying the locality of computations, whether global or local;

  • support for out-of-core computations;

  • interoperability with existing NumPy-based packages.

Expressions involving ODIN arrays will allow users to perform sophisticated array computations in a distributed fashion, including basic array computations, array slicing and fancy-indexing computations, finite-difference-style computations, and several more.  ODIN’s road map includes array expression analysis and loop fusion for optimizing distributed computations.  ODIN will provide built-in capabilities for distributed UFunc calculations as well as reduction and accumulation-type computations.  ODIN is designed to be extensible and adaptable to existing libraries, and will allow domain experts to make their distributed algorithms easily available to a much wider audience on a common platform.  The package will build on existing technologies and take inspiration from several distributed array libraries and languages already in existence, including Chapel, X10, Fortress, High Performance Fortran, and Julia.  ODIN will interoperate with the Trilinos suite of HPC solvers via PyTrilinos, and will provide a high-level interface to make Trilinos and PyTrilinos easier to use.
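
ODIN itself is not yet released, so the snippet below uses plain NumPy on one process; the point is that the planned interface mirrors NumPy’s, so that (hypothetically) swapping the import for the ODIN namespace would distribute the same expressions across processes with no other changes:

```python
import numpy as np          # stand-in for the planned (hypothetical) ODIN namespace

a = np.zeros((1000, 500))   # ODIN: created with a default block decomposition
b = np.ones((1000, 500))

c = np.sin(a) + 2.0 * b     # ufunc expression; ODIN: evaluated in parallel
interior = c[1:-1, 1:-1]    # slicing; ODIN: communication handled by the library

total = c.sum()             # reduction; ODIN: a global sum available on every process
print(total, interior.shape)
```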

ODIN will be tested on the Texas Advanced Computing Center’s Stampede supercomputer, and scaling tests will be run on Stampede’s Intel Xeon Phi coprocessors.

PyTrilinos improvements and enhancements

Trilinos is a suite of dozens of HPC packages that provide access to state-of-the-art distributed solvers, and PyTrilinos is the Python interface to several of the Trilinos packages.  The Trilinos packages, developed primarily at Sandia National Laboratories, allow scientists to solve partial differential equations and large linear, nonlinear, and optimization problems in parallel, from desktops to distributed clusters to supercomputers, with active research on modern architectures such as GPUs.  Bill Spotz, senior research scientist at Sandia, will lead the PyTrilinos portion of the project to improve and continue to expand the PyTrilinos interfaces, making Trilinos easier to use.
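
For readers who have not seen PyTrilinos, a small example of its existing Epetra layer gives the flavor: a map describes how vector entries are distributed over MPI processes, and the vector behaves locally like a NumPy array (details can vary between Trilinos releases):

```python
# Run under MPI, e.g.: mpiexec -n 4 python epetra_demo.py
from PyTrilinos import Epetra

comm = Epetra.PyComm()                # MPI communicator (serial if built without MPI)
n_global = 1000
emap = Epetra.Map(n_global, 0, comm)  # distribute n_global entries, index base 0

x = Epetra.Vector(emap)               # distributed vector; local part acts like a NumPy array
x.PutScalar(1.0)

print("rank", comm.MyPID(), "of", comm.NumProc(),
      "owns", emap.NumMyElements(), "entries")
```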

Seamless

Seamless provides functionality to speed up Python via JIT compilation and makes integration between Python and other languages nearly effortless. Built on LLVM, Seamless uses the compiler’s introspection capabilities to wrap existing C and C++ (and eventually Fortran) libraries while minimizing code duplication, combining many of the best features of Cython, ctypes, SWIG, and PyPy.
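
Since Seamless is still in development, there is no public API to show yet. As a baseline for what it aims to streamline, here is how calling into an existing C library looks today with ctypes (one of the tools it draws on): the user must locate the library and spell out each function signature by hand, which is exactly the boilerplate Seamless intends to derive automatically from headers via LLVM.

```python
# ctypes baseline (POSIX systems): wrap the C math library's cos() by hand.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))  # locate and load libm
libm.cos.argtypes = [ctypes.c_double]              # declare the signature manually
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))   # 1.0
```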

We are very excited to have the opportunity to work on this Python HPC framework, and look forward to working with the Scientific Python community to move NumPy into the next age of distributed scientific computing.  We will post progress updates on Enthought’s website.  We would like to thank the Department of Energy’s SBIR program for the opportunity to develop these packages, and the collaborators and industry partners whose support made this possible.

[0] http://science.energy.gov/sbir/awards/

[1] http://science.energy.gov/~/media/sbir/excel/2013_Phase_II_Release_1.xlsx

[2] http://www.python.org/dev/peps/pep-3118/

EuroScipy 2012

EuroScipy 2012 starts tomorrow! Four days of exciting tutorials and talks. The conference is hosted in Brussels at the Université Libre de Bruxelles (ULB), which you probably know if you went to FOSDEM.

The first two days are dedicated to a great set of tutorials. The introductory track should please any new data analyst starting with Python:

  • array manipulation with NumPy
  • plotting with Matplotlib
  • introduction to scientific computing with SciPy.

In the advanced track, HPC and parallel computing are the main focus, but the tutorials also cover:

  • advanced NumPy and SciPy
  • time series data analysis with Pandas
  • visualisation
  • packaging and scientific software development insights.


Last but not least, the European Enthought team will offer:

  • a tutorial on Enaml, a new library that makes GUI programming fun
  • a tutorial on how to write robust scientific code with testing
  • a tutorial about Bento, a pythonic packaging system for Python software

Plan for an exciting weekend as well, with talks covering topics from finance to geophysics to biology. Don’t forget to come for the keynote sessions with David Beazley on Saturday and Eric Jones, Enthought’s CEO, on Sunday!


See you in Brussels!