Python has a broad range of tools for data analysis and visualization. While Excel can produce various types of plots, sometimes they aren’t quite good enough, or it’s simply preferable to use matplotlib.
Users already familiar with matplotlib will know that when a plot is shown from a Python script, the script blocks until the user closes the plot window. In an IPython console, by contrast, control returns to the prompt immediately after the plot appears, which is useful for interactive development.
Something that has been asked a couple of times is how to use matplotlib within Excel using PyXLL. As matplotlib is just a Python package like any other, it can be imported and used just as it would be from any Python script. The difficulty is that the call to show a plot blocks, so control isn’t returned to Excel until the user closes the window.
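As a minimal sketch of the non-blocking behaviour described above, matplotlib’s `show` accepts a `block` argument. The `show_plot` helper and sample data here are illustrative only, and the Agg backend is used just so the sketch runs headlessly (inside Excel you would use an interactive backend):

```python
# Sketch: show a matplotlib plot without blocking the caller, so a
# process such as an Excel session driven by PyXLL regains control
# immediately instead of waiting for the window to be closed.
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch only
import matplotlib.pyplot as plt

def show_plot(x, y):
    """Draw a line plot and return control immediately."""
    fig, ax = plt.subplots()
    ax.plot(x, y)
    plt.show(block=False)  # returns at once rather than blocking
    return fig

fig = show_plot([1, 2, 3], [2, 4, 9])
```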
This blog shows how to plot data from Excel using matplotlib and PyXLL so that Excel can continue to be used while a plot window is active, and so that the same window can be updated whenever the data in Excel changes.
On May 28, 2014 Phillip Cloud, core contributor for the Pandas data analytics Python library, spoke at a joint meetup of the New York Quantitative Python User’s Group (NY QPUG) and the NY Finance PUG. Enthought hosted and about 60 people joined us to listen to Phillip present some of the less-well-known, but really useful features that have come out since Pandas version 0.11 and some that are coming soon. We all learned more about how to take full advantage of the Pandas Python library, and got a better sense of how excited Phillip was to discover Pandas during his graduate work.
After a fairly comprehensive overview of Pandas, Phillip got into the new features introduced in version 0.11 and later.
Today Enthought announced that it is now the worldwide distributor for PyXLL, and we’re excited to offer this key product for deploying Python models, algorithms and code to Excel. Technical teams can use the full power of Enthought Canopy, or another Python distribution, and end-users can access the results in their familiar Excel environment. And it’s straightforward to set up and use.
Installing PyXLL from Enthought Canopy
PyXLL is available as a package subscription (with significant discounts for multiple users). Once you’ve purchased a subscription you can easily install it via Canopy’s Package Manager as shown in the screenshots below (note that at this time PyXLL is only available for Windows users). The rest of the configuration instructions are in the Quick Start portion of the documentation. PyXLL itself is a plug-in to Excel. When you start Excel, PyXLL loads into Excel and reads in Python modules that you have created for PyXLL. This makes PyXLL especially useful for organizations that want to manage their code centrally and deploy to multiple Excel users.
Creating Excel Functions with PyXLL
To create a PyXLL Python Excel function, you use the @xl_func decorator to tell PyXLL that the following function should be registered with Excel, what its argument types are, and optionally what its return type is. PyXLL also reads the function’s docstring and provides that in the Excel function description. As an example, I created a module my_pyxll_module.py and registered it with PyXLL via the PyXLL configuration file.
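A minimal sketch of what such a module might look like, using PyXLL’s documented `@xl_func` decorator. The `py_add` function is a hypothetical example, not from the original post, and the import fallback is only there so the module can be imported and tested outside Excel, where the `pyxll` package isn’t available:

```python
# Sketch of a PyXLL worksheet-function module (e.g. my_pyxll_module.py).
try:
    from pyxll import xl_func
except ImportError:
    # Outside Excel there is no pyxll package; substitute a no-op
    # decorator so the module still imports for local testing.
    def xl_func(signature):
        def decorator(func):
            return func
        return decorator

@xl_func("float a, float b: float")
def py_add(a, b):
    """Adds two numbers. The docstring becomes the Excel description."""
    return a + b
```

The signature string tells PyXLL how to convert the Excel cell values to Python types and back.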
Earlier this month at the first New York Finance Python User’s Group (NY FPUG) meetup, Kelsey Jordahl talked about how PyQL streamlines the development of Python-based finance applications using QuantLib. There were about 30 people attending the talk at the Cornell Club in New York City. We have a recording of the presentation below.
QuantLib is a free, open-source (BSD-licensed) quantitative finance package. It provides tools for financial instruments, yield curves, pricing engines, creating simulations, and date / time management. There is a lot more detail on the QuantLib website along with the latest downloads. Kelsey refers to a really useful blog / open-source book by one of the core QuantLib developers on implementing QuantLib. QuantLib also comes with different language bindings, including Python.
So why use PyQL if QuantLib already ships with Python bindings? In short, PyQL provides a much more Pythonic set of APIs. Kelsey discusses some of the differences between the original QuantLib Python API and the PyQL API and how PyQL streamlines the resulting Python code: you get better integration with other packages like NumPy, better namespace usage and better documentation. PyQL is available on GitHub in the PyQL repo. Kelsey uses the IPython Notebooks in the examples directory to explore PyQL and QuantLib and compares the use of PyQL versus the standard (SWIG) QuantLib Python APIs.
PyQL remains a work in progress, with goals of making its QuantLib coverage more complete, making the API even more Pythonic, and getting a successful build on Windows (it currently builds on Mac OS and Linux). It’s open source, so feel free to step up and contribute!
We posted a recording of a 30-minute webinar that we did on the 20th covering what Canopy is and why we developed it. There are a few minutes of Brett Murphy (Product Manager at Enthought) discussing the “why” with some slides, and then Jason McCampbell (Development Manager for Canopy) gets into the interesting part with a 15+ minute demo of some of the key capabilities and workflows in Canopy. If you would like to watch the recorded webinar, you can find it here (the different formats play directly in different browsers, so check them and you won’t have to download the whole recording first):
Summed up in one line: Canopy provides the minimal set of tools for non-programmers to access, analyze and visualize data in an open-source Python environment.
The challenge in the past for scientists, engineers and analysts who wanted to use Python had been pulling together a working, integrated Python environment for scientific computing. Finding compatible versions of the dozens of Python packages, compiling them and integrating it all was very time consuming. That’s why we released the Enthought Python Distribution (EPD) many years back. It provided a single install of all the major packages you needed to do scientific and analytic computing with Python.
But the primary interface for a user of EPD was the command line. For a scientist or analyst used to an environment like MATLAB or one of the R IDEs, the command line is a little unapproachable and makes Python challenging to adopt. This is why we developed Canopy.
Enthought Canopy is both a Python distribution (like EPD) and an analysis environment. The analysis environment includes an integrated editor and IPython prompt to facilitate script development & testing and data analysis & plotting. The graphical package manager becomes the main interface to the Python ecosystem with its package search, install and update capabilities. And the documentation browser makes online documentation for Canopy, Python and the popular Python packages available on the desktop.
Check out the Canopy demo in the recorded webinar (link above). We hope it’s helpful.
Update (Feb 6, 2014): Enthought is now the exclusive distributor of PyXLL, a solution that helps users avoid “Excel Hell” by making it easy to develop add-ins for Excel in Python. Learn more here.
Didrik Pinte gave an informative, provocatively-titled presentation at the second, in-person New York Quantitative Python User’s Group (NY QPUG) meeting earlier this month.
There are a lot of examples in the press of Excel workflow mess-ups and spreadsheet errors contributing to some eye-popping mishaps in the finance world (e.g. spreadsheet issues at JP Morgan may have contributed to the massive 2012 “London Whale” trading loss). Most of these can be traced to similar fundamental issues:
Error-prone manual operations (cut & paste, …)
Difficulty tracing I/O through libraries/APIs
Missing version control
Toolchain that doesn’t meet the needs of researchers, analysts, IT, etc.
Python, the language, and its tool ecosystem can provide a nice solution to these challenges, and many organizations are already turning to Python-based workflows in response. And with integration tools like PyXLL (which executes Python functions within Excel) and others, organizations can adopt Python-based workflows incrementally and start improving their current “Excel Hell” situation quickly.
For the details, check out the video of Didrik’s NY QPUG presentation. He demonstrates an example solution using PyXLL and Enthought Canopy.
Though all eyes are probably on the aftermath of PyCon (which, from all reports, was another great conference), Enthought was happy to sponsor the first New York Quantitative Python User Group Meetup (wow that’s a mouthful) on March 6th. If you are in the New York area, you can sign up for the group here.
Our contribution to the evening focused on the theme “From Research to Application.” And by “research” we meant both research code (Marcos’ CLA code is one example) and actual investment research. Firms are wrestling with data and trying to marshal all the expertise within the organization to make decisions. Increasingly, software is being used to help synthesize this information. In our thought experiment, we imagined a hypothetical portfolio manager or strategist who is trying to integrate the quantitative and fundamental expertise within the firm. What kind of information would this PM want to see? How could we make the application visually appealing and intuitively interactive?
We chose to use the Black-Litterman model to tie some of these threads together. In a nutshell, Black-Litterman takes a Bayesian approach to portfolio optimization. It assumes that the capital allocations in the market are decent and reverses the classical optimization process to infer expected returns (rather than weights). It also allows modification of these expected returns to reflect analyst views on a particular asset. For those of you not familiar with this subject, you can find an accessible discussion of the approach in He and Litterman (1999). Using the Black-Litterman model as our organizing principle, we put together an application that provides context for historical returns, relative value, and pairwise asset correlations, all wired together to provide full interactivity.
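As a rough illustration of the “reversed optimization” step described above, the following sketch infers the implied equilibrium excess returns from market weights via Π = δΣw, the standard Black-Litterman starting point. The covariance matrix, weights, and risk-aversion coefficient here are made-up numbers, not from our demo:

```python
import numpy as np

def implied_returns(delta, Sigma, w_market):
    """Black-Litterman reverse optimization: infer the equilibrium
    excess returns implied by market-cap weights, rather than
    optimizing weights from assumed returns (Pi = delta * Sigma * w)."""
    return delta * Sigma @ w_market

# Toy two-asset example: covariance matrix, market weights, and a
# risk-aversion coefficient of 2.5 (all illustrative values).
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
w = np.array([0.6, 0.4])
pi = implied_returns(2.5, Sigma, w)
```

Analyst views would then be blended with these equilibrium returns in the Bayesian updating step.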
Given the limited time we had to put this together, there are obviously things we would have changed and things we would have liked to include. Nevertheless, we think the demo is a good example of how one can use open source technology to not only take advantage of research code but also integrate quantitative models and fundamental research.
FYI, the libraries used in the app are: Numpy/Pandas, Scipy, Traits, Chaco, and Enaml.
Videos of the talks are below. Tell us what you think!
Over 80 people attended last night’s London Financial Python User Group (LFPUG), with presentations given by Den Pilsworth of AHL/MAN, Eric Jones of Enthought, and Wes McKinney of Pandas fame. It was an evening filled with practical content, so come on out for the next meetup if you are in town (or for drinks at the pub afterwards)!
The agenda for the evening:
1. “Moving an algo business from R and Java to Python”, Dennis Pilsworth, AHL, Man Group
2. “Financial data analysis in Python with pandas”, Wes McKinney
3. “Fostering Python Adoption within a Company”, Eric Jones, Enthought.
Den presented a case study of how his firm introduced Python into production and made sure that “network distributed” deployment performed well enough to give good local response times without overloading the network. He also discussed visualization and pointed out that native Python tools need some work to remain competitive with the R user’s sweetheart, ggplot2. He graciously acknowledged the role Enthought’s training played in getting things rolling.
Wes McKinney discussed the latest Pandas developments, particularly its group-by functionality. A number of attendees were interested in potentially using this functionality to replace Excel pivot tables. Make sure to check out Wes’ new book, “Python for Data Analysis.”
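As a toy illustration of that pivot-table use case, a pandas group-by aggregates a column by key much as a pivot table would. The DataFrame below is invented for the example, not from the talk:

```python
import pandas as pd

# A small table of trading P&L, shaped like data you might otherwise
# summarize with an Excel pivot table.
df = pd.DataFrame({
    "desk":   ["rates", "rates", "fx", "fx"],
    "trader": ["ann", "bob", "ann", "bob"],
    "pnl":    [120.0, -30.0, 55.0, 10.0],
})

# Group-by replaces the pivot table's "rows + summarize" step:
# total P&L per desk.
pnl_by_desk = df.groupby("desk")["pnl"].sum()
```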
Eric Jones discussed how to get Python adopted in the face of opposition, featuring some of the classic objections (e.g. “Python is too slow”).
LFPUG meets roughly every other month, so look us up on LinkedIn and keep an eye out for the next meeting!