EPD 6.1: MKL on Linux, Windows, & OSX

There were several reasons we initially decided to include MKL, an extensively threaded, highly optimized library, in the Enthought Python Distribution. For one thing, we like that MKL detects the processing capability of the machine and then runs the optimal algorithm for that hardware. Second, we knew that MKL would offer faster linear algebra routines than the ATLAS framework previously used for EPD on Linux and Windows, and the Accelerate library previously used on OSX.

We didn’t anticipate, however, just how dramatic that speed-up would be. Our benchmarking tests document the astounding increases in processing speed that MKL lends to EPD.

In EPD 6.1, NumPy and SciPy are dynamically linked against the MKL linear algebra routines. This allows EPD users to benefit seamlessly from the highly optimized BLAS and LAPACK routines in MKL. In addition, EPD 6.1 comes bundled with all of the MKL run-time libraries, so advanced users can take advantage (via ctypes) of even more of the MKL library, such as fast Fourier transforms, trust-region optimization methods, sparse solvers, and vector math.
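
As a rough illustration of the ctypes route, here is a minimal sketch that calls one of MKL’s vector math (VML) routines, vdSin, on a NumPy array. The shared-library name libmkl_rt.so is an assumption; the MKL run-time bundled with EPD may be split across several files with different names on Windows and OSX, so point ctypes.CDLL at whichever MKL library your install actually ships.

    # A minimal sketch (not EPD's own code) of calling MKL's vector math (VML)
    # from Python with ctypes.  The library name "libmkl_rt.so" is an
    # assumption -- substitute the MKL run-time library that your EPD install
    # actually provides on your platform.
    import ctypes
    import numpy as np

    mkl = ctypes.CDLL("libmkl_rt.so")
    mkl.vdSin.restype = None          # vdSin returns void

    n = 1000000
    x = np.linspace(0.0, 2.0 * np.pi, n)      # contiguous float64 input
    y = np.empty_like(x)                      # output buffer

    # vdSin(n, a, r): element-wise sine of a double-precision array.
    mkl.vdSin(ctypes.c_int(n),
              x.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
              y.ctypes.data_as(ctypes.POINTER(ctypes.c_double)))

    print(np.allclose(y, np.sin(x)))          # sanity check against NumPy

The same ctypes pattern applies to the FFT, sparse-solver, and threading-control entry points exposed by the MKL run-time.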

We’re really pleased with the optimizations MKL offers to our EPD users. Try out EPD 6.1 for yourself!

12 thoughts on “EPD 6.1: MKL on Linux, Windows, & OSX”


  1. Baptiste

    It looks like the thing is proprietary, though. Is this the first time EPD includes proprietary code? Do you have a policy on this?

    1. Travis Oliphant

      Yes, MKL is Intel’s proprietary package. Most of the packages inside EPD are also available as open-source software, but we will also include proprietary software that adds value for our customers.

  2. Amenity

    Just to elaborate on Travis’ explanation: While MKL is not open-source, the developer license we purchased allows EPD users to utilize it. See our FAQ for details.

  3. Andrew

    I am testing this and I get multithreaded behavior for the svd call, but it does not appear that the dot() function is multithreaded. Do I need to call some sort of BLAS function (xGEMM) more directly?

  4. Andrew

    Further testing shows that IPython has the expected multithreaded behavior for dot() on both Windows and Linux, so the problem of dot() lacking multithreading appears to be limited to some instances of EPDLab on Windows.

  5. Matt

    That’s funny, because my Windows version is multithreaded and gives fantastic performance, though I may have downloaded a third-party precompiled NumPy binary to get it to work with Python 2.6.5.

    However, my bleeding-edge Linux distro, Arch Linux, won’t even load the MKL libraries (MKL 10.2.5). Another RHEL 5 machine loads MKL, but nothing is multithreaded, including ATLAS.

    The only multithreaded Linux dot() I’ve gotten working is NumPy 1.3, Python 2.6, and a precompiled ATLAS binary on a Fedora 11 machine.


  6. Andrew

    Well, Matt, I have not had a problem with multithreaded dot() on Linux yet, using Ubuntu and Fedora. I’m mainly sticking with Ubuntu now because Fedora’s SELinux is too annoying in the way it interacts with EPD.

    However, I can also say that the excellent performance I get on Linux with Intel Xeon 5680 processors is not replicated on the AMD Magny-Cours Opteron 6174, which, although multithreaded, is a lot slower than I would have expected.

    I’ve chatted with Enthought about this, and they don’t have a version of EPD linked against ACML (AMD’s equivalent library). I would expect Opteron+ACML to do even better than the Intel offerings, but the compiler I have that could build this sort of code (PGI) isn’t likely to make getting the entire EPD through much fun.

  7. Alexander Migdal

    I am working with large sparse matrices from SciPy in the Anaconda package, using Intel MKL. I noticed that MKL does not parallelize the sparse matrix algebra, contrary to claims. My tests showed that the runtimes for sparse matrix multiplication, for example, do not depend on the number of cores set in MKL. On the other hand, dense matrix multiplication speeds up significantly as the number of cores increases. My conclusion is that the scipy.sparse routines are not actually hooked up to MKL in the Anaconda package.
    Is it the same with Enthought?

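
For readers who want to reproduce the kind of checks described in the comments above, whether dot() and SVD respond to the MKL thread count and whether sparse operations do, here is a minimal timing sketch. The matrix sizes and the script name are arbitrary, and the thread count is controlled through the MKL_NUM_THREADS environment variable rather than an in-process call.

    # A minimal timing sketch for the threading questions above -- not
    # Enthought's benchmark, just one way to check which operations respond
    # to the MKL thread count.  Save it as, say, check_threads.py (the name
    # is only illustrative) and run it with different settings, e.g.
    #
    #     MKL_NUM_THREADS=1 python check_threads.py
    #     MKL_NUM_THREADS=4 python check_threads.py
    #
    # Routines that go through MKL's threaded BLAS/LAPACK (dot, SVD) should
    # speed up; scipy.sparse arithmetic, which runs in SciPy's own compiled
    # code rather than in BLAS, is not expected to change.
    import time
    import numpy as np
    import scipy.sparse as sp

    def run_once(func):
        t0 = time.time()
        func()
        return time.time() - t0

    def timed(label, func, repeats=3):
        # Report the best of a few runs to reduce timing noise.
        print("%-22s %.3f s" % (label, min(run_once(func) for _ in range(repeats))))

    n = 3000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    # Random sparse matrix with roughly 1% nonzeros, in CSR format.
    nnz = n * n // 100
    rows = np.random.randint(0, n, nnz)
    cols = np.random.randint(0, n, nnz)
    s = sp.csr_matrix((np.random.rand(nnz), (rows, cols)), shape=(n, n))

    timed("dense dot (dgemm)", lambda: np.dot(a, b))
    timed("SVD (LAPACK)", lambda: np.linalg.svd(a[:1000, :1000]))
    timed("sparse csr * csr", lambda: s * s)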
