Category Archives: Conferences

Enthought Presents the Canopy Platform at the 2017 American Institute of Chemical Engineers (AIChE) Spring Meeting

by: Tim Diller, Product Manager and Scientific Software Developer, Enthought

Last week I attended the AIChE (American Institute of Chemical Engineers) Spring Meeting in San Antonio, Texas. It was a great time of year to visit this cultural gem deep in the heart of Texas (and just down the road from our Austin offices), with plenty of good food, sights and sounds to take in on top of the conference and its sessions.

The AIChE Spring Meeting focuses on applications of chemical engineering in industry, and Enthought was invited to present a poster and deliver a “vendor perspective” talk on the Canopy Platform for Process Monitoring and Optimization as part of the “Big Data Analytics” track. This was my first time at AIChE, so some of the names were new, but in a lot of ways it felt very similar to many other engineering conferences I have participated in over the years (for instance, ASME (American Society of Mechanical Engineers), SAE (Society of Automotive Engineers), etc.).

This event underscored that, regardless of industry, engineers are bringing the same kinds of practical ingenuity to bear on similar kinds of problems. And with the cost of data acquisition and storage plummeting over the last decade, many engineers are now sitting on more data than they know how to handle effectively.

What exactly is “big data”? Does it really matter for solving hard engineering problems?

One theme that came up time and again in the “Big Data Analytics” sessions Enthought participated in was the question of what exactly makes data “big.” In many circles, a good working definition is that the data exceeds the physical RAM of the machine doing the computation, so that something other than simply loading the data into memory has to be done to make meaningful computations; by that measure, some tens of GB separate “big” data from “small.”

For others, including many at the conference, a more mundane definition of “big” is simply that the data set doesn’t fit within the row or column limits of a Microsoft Excel worksheet.

But whether your data is “big” is largely beside the point as far as we at Enthought are concerned: being “big” just adds complexity to an already hard problem, and the kind of complexity it adds is an implementation detail that depends on the problem at hand.
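As a minimal illustration of the kind of complexity “big” adds, here is a sketch of out-of-core processing with pandas, assuming a hypothetical CSV of sensor readings (sensor_log.csv, with channel and value columns) that is too large to load at once:

import pandas as pd

# Accumulate a running mean per channel without ever loading the whole file,
# reading the (hypothetical) CSV in fixed-size chunks instead.
totals = {}
counts = {}
for chunk in pd.read_csv("sensor_log.csv", chunksize=1000000):
    sums = chunk.groupby("channel")["value"].sum()
    ns = chunk.groupby("channel")["value"].count()
    for channel in sums.index:
        totals[channel] = totals.get(channel, 0.0) + sums[channel]
        counts[channel] = counts.get(channel, 0) + ns[channel]

means = {channel: totals[channel] / counts[channel] for channel in totals}
print(means)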

And that relates to the central message of my talk, which was that an analytics platform (in this case I was talking about our Canopy Platform) should abstract away the tedious complexities, and help an expert get to the heart of the hard problem at hand.

At AIChE, the “hard problems” at hand seemed invariably to involve one or both of two things: (1) increasing safety/reliability, and (2) increasing plant output.

To solve these problems, two general kinds of activity were on display: different pattern recognition algorithms and tools, and modeling, typically through some kind of regression-based approach. Both of these things are straightforward in the Canopy Platform.
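As a small illustration of the second kind of activity, a regression model of this sort takes only a few lines of scikit-learn; the process variables and numbers below are made up purely for the example:

import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up process data: two inputs (temperature, feed rate) vs. plant output.
X = np.array([[350.0, 1.2], [360.0, 1.4], [355.0, 1.3], [370.0, 1.6], [365.0, 1.5]])
y = np.array([10.1, 11.0, 10.6, 12.2, 11.7])

model = LinearRegression().fit(X, y)
print(model.coef_)                      # fitted sensitivity to each input
print(model.predict([[362.0, 1.45]]))   # predicted output at a new operating point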

The Canopy Platform is a collection of related technologies that work together in an integrated way to support the scientist/analyst/engineer.

What is the Canopy Platform?

If you’re using Python for science or engineering, you have probably used or heard of Canopy, Enthought’s Python-based data analytics application. It offers an integrated code editor and interactive command prompt, package manager, documentation browser, debugger, variable browser, and data import tool, along with lots of hidden features, like support for many kinds of proxy systems, that work behind the scenes to make a seamless work environment in enterprise settings.

However, this is just one part of the Canopy Platform. Over the years, Enthought has been building other components and related technologies that work together in an integrated way to support the engineer/analyst/scientist solving hard problems.

At the center of this is the Enthought Python Distribution, with runtime interpreters for Python 2.7 and 3.x and over 450 pre-built Python packages for scientific computing, including tools for machine learning and the kind of regression modeling that was shown in some of the other presentations in the Big Data sessions. Other components of the Canopy Platform include interface modules for Excel (PyXLL) and for National Instruments’ LabVIEW software (Python Integration Toolkit for LabVIEW), among others.

A key component of our Canopy Platform is our Deployment Server, which simplifies the tricky tasks of deploying proprietary applications and packages or creating customized, reproducible Python environments inside an organization, especially behind a firewall or an air-gapped network.

Finally, (and this is what we were really showing off at the AIChE Big Data Analytics session) there are the Data Catalog and the Cloud Compute layers within the Canopy Platform.

The Data Catalog provides an indexed interface to potentially heterogeneous data sources. These can range from a simple network directory with a collection of HDF5 files to a server hosting files with the Byzantine complexity of the IRIG 106 Ch. 10 Digital Recorder Standard used by US military test flight ranges. The nice thing about the Data Catalog is that it lets you query and select data based on computed metadata, for example “factory A, on Tuesdays when Ethylene output was below 10kg/hr”, or in a test flight data example “test flights involving a T-38 that exceeded 10,000 ft but stayed subsonic.”
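The Data Catalog’s own query interface isn’t reproduced here, but the flavor of a computed-metadata query like the first example can be sketched with a plain pandas filter over a hypothetical metadata table:

import pandas as pd

# Hypothetical per-run metadata of the kind the catalog might compute and index.
meta = pd.DataFrame({
    "factory":        ["A", "A", "B", "A"],
    "weekday":        ["Tuesday", "Wednesday", "Tuesday", "Tuesday"],
    "ethylene_kg_hr": [8.5, 12.0, 9.0, 11.5],
    "run_id":         [101, 102, 103, 104],
})

# "factory A, on Tuesdays when Ethylene output was below 10 kg/hr"
selection = meta[(meta.factory == "A")
                 & (meta.weekday == "Tuesday")
                 & (meta.ethylene_kg_hr < 10.0)]
print(selection.run_id.tolist())   # -> [101]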

With the Cloud Compute layer, an expert user can write code and test it locally on some subset of data from the Data Catalog. Then, when it is working to satisfaction, he or she can publish the code as a computational kernel to run on some other, larger subset of the data in the Data Catalog, using remote compute resources, which might be an HPC cluster or an Apache Spark server. That kernel is then available to other users in the organization, who do not have to understand the algorithm to run it on other data queries.

In the demo below, I showed hooking up the Data Catalog to some historical factory data stored on a remote machine.

Data Catalog View: The Data Catalog allows selection of subsets of the data set for inspection and ad hoc analysis. Here, three channels are compared using a time window set on the time series data shown on the top plot.

Then, using a compute kernel developed and tested locally, I ran a principal component analysis on the frequencies of the channel data for a subset of the data in the Data Catalog. I then published the kernel and ran it on the entire data set using the remote compute resource.
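The kernel itself boils down to a few lines of standard scientific Python; a minimal sketch of a PCA over the frequency content of a few channels (with random data standing in for the factory signals) looks roughly like this:

import numpy as np
from sklearn.decomposition import PCA

# Random stand-in for one window of time-series data: 3 channels x 1024 samples.
rng = np.random.RandomState(0)
signals = rng.randn(3, 1024)

# Frequency content of each channel: magnitude of the real FFT.
spectra = np.abs(np.fft.rfft(signals, axis=1))

# Project the per-channel spectra onto their first two principal components.
pca = PCA(n_components=2)
components = pca.fit_transform(spectra)
print(components.shape)                # (3, 2)
print(pca.explained_variance_ratio_)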

Once the compute kernel has been published and run on the entire data set, the result explorer tool enables further interaction with the results.

Ultimately, the Canopy Platform is for building and distributing applications that solve hard problems.  Some of the products we have built on the platform are available today (for instance, Canopy Geoscience and Virtual Core), others are in prototype stage or have been developed for other companies with proprietary components and are not publicly available.

It was exciting to participate in the Big Data Analytics track this year, to see what others are doing in this area, and to be a part of many interesting and fruitful discussions. Thanks to Ivan Castillo and Chris Reed at Dow for arranging our participation.

AAPG 2016 Conference Technical Presentation: Unlocking Whole Core CT Data for Advanced Description and Analysis

Microscale Imaging for Unconventional Plays Track Technical Presentation:

Unlocking Whole Core CT Data for Advanced Description and Analysis

Brendon Hall, Geoscience Applications Engineer, Enthought
American Association of Petroleum Geologists (AAPG) 2016 Annual Convention and Exposition Technical Presentation
Tuesday, June 21st at 4:15 PM, Hall B, Room 2, BMO Centre, Calgary

Presented by: Brendon Hall, Geoscience Applications Engineer, Enthought, and Andrew Govert, Geologist, Cimarex Energy

PRESENTATION ABSTRACT:

It has become an industry standard for whole-core X-ray computed tomography (CT) scans to be collected over cored intervals. The resulting data is typically presented as static 2D images, video scans, and as 1D density curves.

CT scans of cores before and after processing to remove artifacts and normalize features.

However, the CT volume is a rich data set of compositional and textural information that can be incorporated into core description and analysis workflows. In order to access this information, the raw CT data initially has to be processed to remove artifacts such as the aluminum tubing, wax casing and mud filtrate. CT scanning effects such as beam hardening are also accounted for. The resulting data is combined into a contiguous volume of CT intensity values which can be directly calibrated to plug bulk density.
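As a rough sketch of that final calibration step (the real workflow involves more than this), a linear fit between mean CT intensity at the plug depths and the measured plug bulk densities could look like the following, with made-up numbers standing in for real core data:

import numpy as np

# Made-up calibration points: mean CT intensity at each plug depth
# and the corresponding laboratory plug bulk density (g/cc).
ct_intensity = np.array([1450.0, 1520.0, 1610.0, 1700.0, 1780.0])
plug_density = np.array([2.30, 2.38, 2.47, 2.55, 2.63])

# Least-squares linear calibration: density = a * intensity + b
a, b = np.polyfit(ct_intensity, plug_density, 1)

# Apply the calibration to the whole volume of CT intensity values.
ct_volume = np.full((64, 64, 512), 1600.0)   # stand-in for the processed CT volume
density_volume = a * ct_volume + b
print(density_volume.mean())                 # calibrated density at intensity 1600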

Continue reading

Enthought’s Prabhu Ramachandran Announced as Winner of Kenneth Gonsalves Award 2014 at PyCon India

From PyCon India: Published / 25 Sep 2014

PSSI [Python Software Society of India] is happy to announce that Prabhu Ramachandran, faculty member of Department of Aerospace Engineering, IIT Bombay [and managing director of Enthought India] is the winner of Kenneth Gonsalves Award, 2014.

Enthought's Prabhu Ramachandran, winner of Kenneth Gonsalves Award 2014

Prabhu has been active in the Open source and Python community for close to 15 years. He co-founded the Chennai LUG in 1998. He is also well known as the author and lead developer of the award winning Mayavi and TVTK Python packages. He also maintains PySPH, an open source framework for Smoothed Particle Hydrodynamics (SPH) simulations.

Prabhu has also been a member of the Board of the Python Software Foundation since 2010 and is closely involved with the activities of FOSSEE and SciPy India. His research interests are primarily in particle methods and applied scientific computing.

Prabhu will be presented the Award on 27th Sep, the opening day of PyCon India 2014. PSSI and Team PyCon India would like to extend their hearty Congratulations to Prabhu for his achievement and wish him the very best for his future endeavours.

————————–

Congratulations, Prabhu, we’re honored to have you as part of the Enthought team!

Python at Inflection Point in HPC

Authors: Kurt Smith, Robert Grant, and Lauren Johnson

We attended SuperComputing 2013, held November 17-22 in Denver, and saw huge interest around Python. There were several Python related events, including the “Python in HPC” tutorial (Monday), the Python BoF (Tuesday), and a “Python for HPC” workshop held in parallel with the tutorial on Monday. But we had some of our best conversations on the trade show floor.

Python Buzz on the Floor

The Enthought booth had a prominent “Python for HPC: High Productivity Computing” headline, and we looped videos of our parallelized 2D Julia set rendering GUI (video below).  The parallelization used Cython’s OpenMP functionality, came in at around 200 lines of code, and generated lots of discussions.  We also used a laptop to display an animated 3D Julia set rendered in Mayavi and to demo Canopy.
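The booth demo itself used Cython and OpenMP, but the underlying computation is small; a serial NumPy sketch of the same escape-time Julia set calculation (no OpenMP, just to show the idea) fits in a dozen lines:

import numpy as np

def julia(c=-0.8 + 0.156j, n=512, bound=2.0, max_iter=200):
    # Escape-time iteration counts for the Julia set of z -> z**2 + c.
    x = np.linspace(-1.5, 1.5, n)
    y = np.linspace(-1.5, 1.5, n)
    z = x[None, :] + 1j * y[:, None]
    counts = np.zeros(z.shape, dtype=int)
    alive = np.ones(z.shape, dtype=bool)
    for i in range(max_iter):
        z[alive] = z[alive] ** 2 + c
        alive &= np.abs(z) < bound
        counts[alive] = i
    return counts

image = julia()
print(image.shape)   # (512, 512)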

Many people came up to us after seeing our banner and video and asked “I use Python a little bit, but never in HPC – what can you tell me?”  We spoke with hundreds of people and had lots of good conversations.

It really seems like Python has reached an inflection point in HPC.

Python in HPC Tutorial, Monday

Kurt Smith presented a 1/4 day section on Cython, which was a shortened version of what he presented at SciPy 2013.  In addition, Andy Terrel presented “Introduction to Python”; Aron Ahmadia presented “Scaling Python with MPI”; and Travis Oliphant presented “Python and Big Data”. You can find all the material on the PyHPC.org website.

The tutorial was generally well attended: about 100–130 people.  A strong majority of attendees were already programming in Python, with about half using Python in a performance-critical area and perhaps 10% running Python on supercomputers or clusters directly.

In the Cython section of the tutorial, Kurt went into more detail on how to use OpenMP with Cython, which was of interest to many based on questions during the presentation. For the exercises, students were given temporary accounts on  Stampede (TACC’s latest state-of-the-art supercomputer) to help ensure everyone was able to get their exercise environment working.

Andy’s section of the day went well, covering the basics of using Python.  Aron’s section was good for establishing that Python+MPI4Py can scale to ~65,000 nodes on massive supercomputers, and also for addressing people’s concerns regarding the import challenge.
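For readers who haven’t seen MPI4Py, the programming model Aron covered starts from something as small as the sketch below (run it under mpiexec, e.g. mpiexec -n 4 python hello_mpi.py; the file name is arbitrary):

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()     # this process's ID
size = comm.Get_size()     # total number of processes

# Each rank contributes its own rank number; allreduce sums them everywhere.
total = comm.allreduce(rank, op=MPI.SUM)
print("rank %d of %d, sum of ranks = %d" % (rank, size, total))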

Python in HPC workshop, Monday

There was a day-long workshop of presentations on “Python in HPC” which ran in parallel with the “Python for HPC” tutorial. Of particular interest were the talks on “Doubling the performance of NumPy” and “Bohrium: Unmodified NumPy code on CPU, GPU, and Cluster”.

Python for High Performance and Scientific Computing BoF, Tuesday

Andy Terrel, William Scullin, and Andreas Schreiber organized a Birds-of-a-Feather session on Python, which had about 150 attendees (many thanks to all three for organizing a great session!).  Kurt gave a lightning talk on Enthought’s SBIR work.  The other talks focused on applications of Python in HPC settings, as well as IPython notebooks on the basics of the Navier-Stokes equations.

It was great to see so much interest in Python for HPC!

Raspberry Pi Sensor and Actuator Control

Author: Jack Minardi

I gave a talk at SciPy 2013 titled open('dev/real_world') Raspberry Pi Sensor and Actuator Control. You can find the video on YouTube and the slides on Google Drive; I will summarize the content here.

Typically as a programmer you will work with data on disk, and if you are lucky you will draw pictures on the screen. This is in contrast to physical computing which allows you as a programmer to work with data sensed from the real world and with data sent to control devices that move in the real world.

Mars Rover

physical computing at work. (source)

Goal

Use a Raspberry Pi to read in accelerometer values and to control a servo motor.

Definitions

  • Raspberry Pi
    • Small $35 Linux computer with 2 USB ports, HDMI out, Ethernet, and most importantly…
  • GPIO Pins
    • General Purpose Input/Output Pins
    • This is the component that truly enables “physical computing”. You as a programmer can set the voltage high or low on each pin, which is how you will talk to actuators. You can also read what the voltage currently is on each pin; this is how sensors will talk back to you. It is important to note that each pin represents a binary state: you can only output a 0 or a 1, nothing in between.

In this article I will go over four basic Python projects to demonstrate the hardware capabilities of the Raspberry Pi. Those projects are:

  • Blink an LED.
  • Read a pot (potentiometer).
  • Stream data.
  • Control a servo.

Blink an LED.

An LED is a Light Emitting Diode. A diode is a circuit element that allows current to flow in one direction but not the other. Light emitting means … it emits light. Your typical LED needs current in the range of 10-30 mA and will drop about 2-3 volts. If you connect an LED directly to your Pi’s GPIO it will source much more than 30 mA and will probably fry your LED. To prevent this we have to put a resistor in series with the LED. If you want to do math you can calculate the appropriate resistance using the following equation:

R = (Vs - Vd) / I
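For example, with the Pi’s 3.3 V supply (Vs), roughly a 2 V drop across the LED (Vd), and a 10 mA target current (I), that works out to about 130 ohms; anything at or above that keeps the current safely under the limit:

Vs = 3.3     # supply voltage from the GPIO pin (volts)
Vd = 2.0     # approximate voltage drop across the LED (volts)
I = 0.010    # target LED current (amps)

R = (Vs - Vd) / I
print(round(R))   # 130 ohms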

But if you don’t want to do math then pick a resistor between 500-1500 ohms. Once you’ve gathered up all your circuit elements (LED and resistor), build this circuit on a breadboard:

LED Circuit

that’s not so bad, is it?

The code is also pretty simple. But first you will need to install RPi.GPIO. (It might come preinstalled on your OS.)

import time
from itertools import cycle

import RPi.GPIO as io

io.setmode(io.BCM)       # use Broadcom (BCM) pin numbering
io.setup(12, io.OUT)     # configure GPIO 12 as an output

o = cycle([1, 0])        # alternate 1, 0, 1, 0, ...
while True:
    io.output(12, next(o))   # drive the pin high, then low
    time.sleep(0.5)          # every half second

The important lines basically are:

io.setup(12, io.OUT)
io.output(12, 1)

These lines of code set up pin 12 as an output, and then output a 1 (3.3 volts). Run the above code connected to the circuit and you should see your LED blinking on and off every half second.


Read a pot.

A pot is short for potentiometer, which is a variable resistor. This is just a fancy word for knob. Basically, by turning the knob you affect the resistance, which affects the voltage across the pot (V = IR, remember?). Changing voltage relative to some physical value is how many sensors work, and this class of sensor is known as an analog sensor. Remember when I said the GPIO pins can only represent a binary state? We will have to call in the aid of some more silicon to convert that analog voltage value into a binary stream of bits our Pi can handle.

That chunk of silicon is referred to as an Analog-to-Digital Converter (ADC). The one I like is called the MCP3008; it has eight 10-bit channels, meaning we can read eight sensor values with a resolution of 1024 each (2^10). This will map our input voltage of 0 – 3.3 volts to an integer between 0 and 1023.
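Going from a reading back to a voltage is just the reverse of that mapping:

def adc_to_volts(counts, vref=3.3):
    # Convert a 10-bit MCP3008 reading (0-1023) back to a voltage.
    return counts * vref / 1023.0

print(adc_to_volts(0))      # 0.0 volts
print(adc_to_volts(1023))   # 3.3 volts
print(adc_to_volts(512))    # roughly half scale, about 1.65 volts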

ADC Circuit

I’ve turned the Pi into ephemeral yellow labels to simplify the diagram

To talk to the chip we will need a Python package called spidev. For more information about the package and how it works with the MCP3008, check out this great blog post.

With spidev installed and the circuit built, run the following program to read live sensor values and print them to stdout.

import spidev
import time

spi = spidev.SpiDev()
spi.open(0, 0)             # open SPI bus 0, chip-select 0

def readadc(adcnum):
    # Read the given MCP3008 channel (0-7); returns an integer 0-1023.
    if not 0 <= adcnum <= 7:
        return -1
    # start bit, then a single-ended read of channel adcnum, plus one padding byte
    r = spi.xfer2([1, (8 + adcnum) << 4, 0])
    # the 10-bit result spans the last two bytes of the reply
    adcout = ((r[1] & 3) << 8) + r[2]
    return adcout

while True:
    val = readadc(0)       # read channel 0
    print val
    time.sleep(0.5)

The most important parts are these two lines:

r = spi.xfer2([1, (8+adcnum)<<4, 0])
adcout = ((r[1] & 3) << 8) + r[2]

They send the read command and extract the relevant returned bits. See the blog post I linked above for more information on what is going on here.


Stream data.

To stream data over the wire we will be using the ØMQ networking library and implementing the REQUEST/REPLY pattern. ØMQ makes it super simple to set up a client and server in Python. The following is a complete working example.

Server

import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)   # REP: reply socket
socket.bind('tcp://*:1980')        # listen on port 1980

while True:
    message = socket.recv()        # wait for a request
    print message
    socket.send("I'm here")        # send the reply

Client

import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)            # REQ: request socket
socket.connect('tcp://192.168.1.6:1980')    # the server's address

for request in range(10):
    socket.send('You home?')       # send a request...
    message = socket.recv()        # ...then wait for the reply
    print message

Now we can use Traits and Enaml to make a pretty UI on the client side. Check out the acc_plot demo in the GitHub repo to see an example of the Pi streaming data over the wire to be plotted by a client.


Control a servo

Servos are (often small) motors which you can drive to certain positions. For example, for a given servo you may be able to set the drive shaft from 0 to 180 degrees, or anywhere in between. As you can imagine, this could be useful for a lot of tasks, not least of which is robotics.

Shaft rotation is controlled by Pulse Width Modulation (PWM), in which you encode information in the duration of a high voltage pulse on the GPIO pins. Most hobby servos follow a standard pulse-width convention: a 0.5 ms pulse means go to your minimum position and a 2.5 ms pulse means go to your maximum position. Repeat this pulse every 20 ms and you’re controlling a servo.
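In other words, mapping a position between 0 and 1 onto that pulse range is a one-liner:

def position_to_pulse_ms(position):
    # Map a position in [0, 1] to a pulse width between 0.5 ms and 2.5 ms.
    return 0.5 + 2.0 * position

print(position_to_pulse_ms(0.0))   # 0.5 ms -> minimum position
print(position_to_pulse_ms(0.5))   # 1.5 ms -> centered
print(position_to_pulse_ms(1.0))   # 2.5 ms -> maximum position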

PWM Diagram

The pulse width is much more critical than the frequency

These kinds of timings are not possible with Python. In fact, they aren’t really possible with a modern operating system: an interrupt could come in at any time in your control code, causing a longer than desired pulse and a jitter in your servo. To meet the timing requirements we have to enter the fun world of kernel modules. ServoBlaster is a kernel module that makes use of the Pi’s DMA control blocks to bypass the CPU entirely. When loaded, the kernel module opens a device file at /dev/servoblaster that you can write position commands to.
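Writing to that device file directly looks roughly like this; the exact command format and step units are assumptions here and depend on how ServoBlaster is configured, so check its README before relying on the numbers:

# Rough sketch: ServoBlaster accepts commands like "<servo>=<value>" written to
# its device file. Here we assume servo 0 and a value in ServoBlaster's default
# step units that corresponds to roughly a centered (about 1.5 ms) pulse.
with open('/dev/servoblaster', 'w') as dev:
    dev.write('0=150\n')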

I’ve written a small object oriented layer around this that makes servo control simpler. You can find my library here:

https://github.com/jminardi/RobotBrain

Simply connect the servo to 5V and ground on your Pi and then connect the control wire to pin 4.

Servo Diagram

The Python code is quite simple:

import time

import numpy as np
from robot_brain.servo import Servo

servo = Servo(0, min=60, max=200)    # servo channel 0, with calibration limits for this servo
for val in np.arange(0, 1, 0.05):
    servo.set(val)                   # sweep from the minimum toward the maximum
    time.sleep(0.1)

All you have to do is instantiate a servo and call its set() method with a floating point value between 0 and 1. Check out the servo_slider demo on github to see servo control implemented over the network.

SciPy 2013 Conference Recap

Author: Eric Jones

Another year, another great conference.  Man, this thing grew a ton this year.  At final count, we had something like 340 participants which is way up from last year’s 200 or so attendees.  In fact, we had to close registration a couple of weeks before the program because that is all our venue could hold.  We’ll solve that next year.  Invite your friends.  We’d love to see 600 or even more.

Many thanks to the organizing team.  Andy Terrell and Jonathan Rocher did an amazing job as conference chairs this year both managing that growth and keeping the trains on time.  We expanded to 3 parallel sessions this year, which often made me want to be in 3 places at once.  Didn’t work.  Thankfully, the videos for all the talks and sessions are available online.  The video team really did a great job — thanks a ton.

I’ve wondered whether the size would change the feel of the conference, but I’m happy to report it still feels like a gathering of friends, new and old.  Aric Hagberg mentioned he thinks this is because it’s such a varied (motley?) crowd from disparate fields gathered to teach, learn, and share software tools and ideas.  This fosters a different atmosphere than some academic conferences where sparring about details of a talk is a common sport.  Hmh.  Re-watching the videos, I see Fernando Perez mentions this as well.

Thanks again to all who organized and all who attended.  I’m already looking forward to seeing you again next year.  Below are my personal musings on various topics at the conference:

  • The tutorials were, as usual, extremely well attended.  I spent the majority of my time there in the scikits learn track by Gael Varoquaux, Olivier Grisel, and Jake VanderPlas.  Jeez, has this project gone far.  It is stunning to see the breadth and quality of the algorithms that they have.  It’s obviously a hot topic these days; it is great to have such an important tool set at our disposal.
  • Fernando Perez gave a keynote this year about IPython.  We can safely say that 2013 is the year of the IPython notebook.  It was *everywhere*.  I’d guess 80+% of the talks and tutorials for the conference used it in their presentations.  Fernando went one step further, and his slide deck was actually live IPython notebooks.  Quite cool.  I do believe it’ll change the way people teach Python…  But, the most impressive thing is that Fernando still has and can execute the original 250 line script that was IPython 0.00001.  Scratch that.  The most impressive thing is to hear how Fernando has managed to build a community and a project that is now supported by a $1.1M grant from the Sloan foundation.  Well done sir.  The IPython project really does set the standard on so many levels.
  • Olivier Grisel, of scikits learn fame, gave a keynote on trends in machine learning.  It was really nice because he talked about the history of neural networks and the advances that have been made in “deep learning” in recent years.  I began grad school in NN research, and was embarrassed to realize how recent (1986) the back propagation learning algorithm was when I first coded it for research (1993).  It seemed old to me then — but I guess 7 years to a 23 year old is, well, pretty old.  Over the years, I became a bit disenchanted with neural nets because they didn’t reveal the underlying physical process within the data.  I still have this bias, but Olivier’s discussion of the “deep learning” advances convinced me that I should get re-educated.  And, perhaps I’m getting more pragmatic as the gray hairs fill in (and the bald spot grows).  It does look like it’s effective for multiple problems in the detection and classification world.
  • William Schroeder, CEO of Kitware, gave a keynote on the importance of reproducible research, which was one of the conference themes.  It was a privilege to have him because of the many ways Kitware illuminated the path for high quality scientific software in the open source world with VTK.  I’ve used it both in C++ and, of course, from Python for many, many years.  In his talk, Will argued that the existing scientific publication model doesn’t work so well anymore, and that, in fact, with the web and tools that are now available, direct publishing of results is the future, together with publishing our data sets and the code that generated them.  This actually dovetailed really well with Fernando’s talk, and I can’t help but think that we are on this track.
  • David Li has been working with the SymPy team, and his talk showed off the SymPy Live site that they have built to interactively try out symbolic calculations on the web.  I believe David is the 2nd high school student to present in the history of SciPy, yes? (Evan Patterson was the other that I remember)  Heh.  Aaand, what were you doing your senior year?  Both were composed, confident, and dang good — bodes well for our future.
  • There are always a few talks of the “what I have learned” flavor at Python.  This year, Brian Granger of IPython fame gave one about the dangers of features and the benefits of bugs.  Brian’s talks are almost always one of my favorites (sorta like I always make sure to see what crazy stuff David Beazley presents at PyCon).  Part of it is that he often talks about parallel computing for the masses which is dear to my heart, but it is also because he organizes his topics so well.
  • Nicholas Kridler also unexpectedly hooked me with another one of these talks.  I was walking out of the conference hall after the keynote to go see what silly things the ever-smiling Jake Vanderplas might be up to in his astronomy talk.  But derned if Nicholas didn’t start walking through how he approaches new machine learning problems in interesting ways.  My steps slowed, and I finally sat down, happy to know that I could watch Jake’s talk later.  Nicholas used his wits and scikits learn to win(!) the Kaggle whale detection competition earlier this year, and he gave us a great overview of how he did it.  Well worth a listen.
  • Both Brian and Nicholas’ talks started me thinking how much I like to see how experts approach problems.  The pros writing all the cool libraries often give talks on the features of their tools or the results of their research, but we rarely get a glimpse into their day-to-day process.  Sorta like pair programming with Martin Chilvers is a life changing experience (heh.  for better or worse… :-)), could we have a series of talks where we get to ride shotgun with a number of different people and see how they work?  How does Ondrej Certik work through a debugging session on SymPy development?  Does his shiny new cowboy hat from Allen Boots help or not?  When approaching a new simulation or analysis, how does Aric Hagberg use graph theory (and Networkx) to set the problem up?  When Serge Rey gets a new set of geospatial data, what are the common things he does to clean and organize the data for analysis with PySAL?  How does Wes McKinney think through API design trade-offs as he builds Pandas?  And, most importantly, how does Stefan Van Der Walt get the front of his hair to stand up like that? (comb or brush? hair dryer on low or high?)  Ok, maybe not Stefan, but you get the idea.  We always see a polished 25 minute presentation that sums up months or years of work that we all know had many false starts and painful points.  If we could learn about where people stubbed their toe and how to avoid it in our work, it would be pretty cool.  Just an idea, but I will run it by the committee for next year and see if there is any interest.
  • The sprints were nothing short of awesome.  Something like 130+ people were there on the first day sprinting on 10-20 different libraries including SymPy, NumPy, IPython, Matplotlib as well as more specific tools like scikits image and PySAL.  Amazing to see.  Perhaps the bigger surprise was that at least half also stayed for Saturday’s sprints.  scikits learn had a team of about 10 people that worked two full days together (Friday and Saturday activity visible on the commit graph), and I think multiple other groups did as well.  While we’ve held sprints for a while, we had 2 to 3 times as many people as 2012, and this year’s can only be described as wildly successful.

  • While I was there, I spent most of my time checking in on the PySide sprint, where John Ehresman of Wingware got a new release ready for the 4.8 series of Qt (bless him), and Robin Dunn, Corran Webster, Stefan Landgovt, and John Wiggins investigated paths forward toward 5.x compatibility.  No one was too excited about Shiboken, but the alternatives are also not a walk in the park.  I think the feeling is that, long term, we’ll need to bite the bullet and go a different direction than Shiboken.

Avoiding “Excel Hell!” using a Python-based Toolchain

Update (Feb 6, 2014):  Enthought is now the exclusive distributor of PyXLL, a solution that helps users avoid “Excel Hell” by making it easy to develop add-ins for Excel in Python. Learn more here.

Didrik Pinte gave an informative, provocatively titled presentation at the second in-person New York Quantitative Python User’s Group (NY QPUG) meeting earlier this month.

There are a lot of examples in the press of Excel workflow mess-ups and spreadsheet errors contributing to some eye-popping mishaps in the finance world (e.g., JP Morgan’s spreadsheet issues may have contributed to the massive 2012 “London Whale” trading loss). Most of these can be traced to similar fundamental issues:

  • Data referencing/traceability

  • Numerical errors

  • Error-prone manual operations (cut & paste, …)

  • Tracing I/O in libraries/APIs

  • Missing version control

  • Toolchain that doesn’t meet the needs of researchers, analysts, IT, etc.

Python, the coding language and its tool ecosystem, can provide a nice solution to these challenges, and many organizations are already turning to Python-based workflows in response. And with integration tools like PyXLL (to execute Python functions within Excel) and others, organizations can adopt Python-based workflows incrementally and start improving their current “Excel Hell” situation quickly.
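As a taste of what that integration looks like, PyXLL’s core idea is a decorator that exposes a Python function as an Excel worksheet function; a minimal sketch (the function name here is just for illustration, and the add-in still has to be configured to load the module) looks like this:

from pyxll import xl_func

@xl_func
def py_compound(principal, rate, years):
    # Callable from an Excel cell as =py_compound(1000, 0.05, 10).
    return principal * (1.0 + rate) ** years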

For the details, check out the video of Didrik’s NY QPUG presentation.  He demonstrates an example solution using PyXLL and Enthought Canopy.

Video: http://vimeo.com/67327735

And grab the PDF of his slides here.

It would be great to hear your stories about “Excel Hell”. Let us know below.

–Brett Murphy

DataGotham…Complete!

Well, DataGotham is over. The conference featured a wide cross section of the data community in NYC. Talks spanned topics from “urban science” to “finding racism on FourSquare” to “creating an API for spaces.” Don’t worry, the videos will be online soon so you can investigate yourself. The organizers did a great job putting a conference of this size together on relatively short notice. Bravo NYC data crunchers!

One thing I somehow missed was a network graph created by the organizers to illustrate the tools used by attendees. I am happy to see Python leading the way! The thickness of an edge indicates the number of people using both tools. It seems there are a lot of people trying to make Python and R “two great tastes that go great together.” I’m curious as to why more Python users aren’t using NumPy and SciPy. Food for thought…

Got tools?