5 Simple Steps to Create a Real-Time Twitter Feed in Excel using Python and PyXLL

PyXLL 3.0 introduced a new, simpler way of streaming real time data to Excel from Python.

Excel has had support for real time data (RTD) for a long time, but it requires a certain knowledge of COM to get it to work. With the new RTD features in PyXLL 3.0 it is now a lot simpler to get streaming data into Excel without having to write any COM code.

This blog will show how to build a simple real time data feed from Twitter in Python using the tweepy package, and then show how to stream that data into Excel using PyXLL.

(Note: The code from this blog is available on github https://github.com/pyxll/pyxll-examples/tree/master/twitter). 


Create a real-time Twitter feed in Excel using Python and PyXLL.

Step 1: Install tweepy and PyXLL

As we are interested in real time data we will use tweepy’s streaming API to connect to Twitter from Python. Details on this are available in the tweepy documentation. You can install tweepy and PyXLL from the Canopy package manager. You may also download PyXLL here.


Easily install Tweepy and PyXLL from Canopy’s Package Manager.

Step 2: Get Twitter API keys

In order to access the Twitter Streaming API you will need a Twitter API key, API secret, Access token and Access token secret. Follow the steps below to get your own access tokens.

1. Create a twitter account if you do not already have one.
2. Go to https://apps.twitter.com/ and log in with your twitter credentials.
3. Click “Create New App”.
4. Fill out the form, agree to the terms, and click “Create your Twitter application”
5. In the next page, click on “API keys” tab, and copy your “API key” and “API secret”.
6. Scroll down and click “Create my access token”, and copy your “Access token” and “Access token secret”.

Step 3: Create a Stream Listener Class to Print Tweets in Python

To start with, we can create a simple listener class that prints tweets as they arrive:

[sourcecode language="python"]
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream
import logging

_log = logging.getLogger(__name__)

# User credentials to access Twitter API
access_token = "YOUR ACCESS TOKEN"
access_token_secret = "YOUR ACCESS TOKEN SECRET"
consumer_key = "YOUR CONSUMER KEY"
consumer_secret = "YOUR CONSUMER KEY SECRET"

class TwitterListener(StreamListener):

    def __init__(self, phrases):
        auth = OAuthHandler(consumer_key, consumer_secret)
        auth.set_access_token(access_token, access_token_secret)
        self.__stream = Stream(auth, listener=self)
        self.__stream.filter(track=phrases, async=True)

    def disconnect(self):
        self.__stream.disconnect()

    def on_data(self, data):
        print(data)
        return True

    def on_error(self, status):
        print("Error: %s" % status)

if __name__ == '__main__':
    import time

    phrases = ["python", "excel", "pyxll"]
    listener = TwitterListener(phrases)

    # listen for 60 seconds then stop
    time.sleep(60)
    listener.disconnect()
[/sourcecode]

If we run this code, any tweets mentioning Python, Excel or PyXLL get printed:

python twitterxl.py

INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): stream.twitter.com
{“text”: “Excel keyboard shortcut – CTRL+1 to bring up Cell Formatting https://t.co/wvx634EpUy”, “is…
{“text”: “Excel Tips – What If Analysis #DMZWorld #Feature #Bond #UMI https://t.co/lxzgZnIItu #UMI”,…
{“text”: “How good are you at using #Excel? We’re looking for South Africa’s #ExcelChamp Ts & Cs…
{“text”: “The Best Data Scientists Run R and Python – insideBIGDATA https://t.co/rwty058dL2 #python …
{“text”: “How to Create a Pivot Table in Excel: A Step-by-Step Tutorial (With Video) \u2013 https://…
{“text”: “Python eats Alligator 02, Time Lapse Speed x6 https://t.co/3km8I92zJo”, “is_quote_status”:…

Process finished with exit code 0

In order to make this more suitable for getting these tweets into Excel we will now extend this TwitterListener class in the following ways:

– Broadcast updates to other *subscribers* instead of just printing tweets.
– Keep a buffer of the last few received tweets.
– Only ever create one listener for each unique set of phrases.
– Automatically disconnect listeners with no subscribers.
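
The third point can be sketched in isolation. This is a hypothetical miniature (not the blog's actual code) of the class-level cache used below: listeners are keyed by a frozenset of phrases, so requesting the same phrases in any order returns the same listener.

```python
# Stand-in cache keyed by frozenset, so phrase order doesn't matter.
_listeners = {}

def get_listener(phrases):
    key = frozenset(map(str, phrases))
    listener = _listeners.get(key)
    if listener is None:
        listener = object()  # stand-in for a real TwitterListener
        _listeners[key] = listener
    return listener

a = get_listener(["python", "excel"])
b = get_listener(["excel", "python"])  # same phrases, different order
assert a is b
```

Because a frozenset is hashable and order-insensitive, it makes a natural dictionary key for de-duplicating listeners.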

The updated TwitterListener class is as follows:

[sourcecode language="python"]
import threading
import json

class TwitterListener(StreamListener):
    """tweepy.StreamListener that notifies multiple subscribers when
    new tweets are received and keeps a buffer of the last 100 tweets.
    """
    __listeners = {}  # class level cache of listeners, keyed by phrases
    __lock = threading.RLock()
    __max_size = 100

    @classmethod
    def get_listener(cls, phrases, subscriber):
        """Fetch an ExcelListener listening to a set of phrases and subscribe to it"""
        with cls.__lock:
            # get the listener from the cache or create a new one
            phrases = frozenset(map(str, phrases))
            listener = cls.__listeners.get(phrases, None)
            if listener is None:
                listener = cls(phrases)
                cls.__listeners[phrases] = listener

            # add the subscription and return
            listener.subscribe(subscriber)
            return listener

    def __init__(self, phrases):
        """Use static method 'get_listener' instead of constructing directly"""
        _log.info("Creating listener for [%s]" % ", ".join(phrases))
        self.__phrases = phrases
        self.__subscriptions = set()
        self.__tweets = [None] * self.__max_size

        # listen for tweets in a background thread using the 'async' keyword
        auth = OAuthHandler(consumer_key, consumer_secret)
        auth.set_access_token(access_token, access_token_secret)
        self.__stream = Stream(auth, listener=self)
        self.__stream.filter(track=phrases, async=True)
        self.__connected = True

    @property
    def tweets(self):
        return list(self.__tweets)

    def subscribe(self, subscriber):
        """Add a subscriber that will be notified when new tweets are received"""
        with self.__lock:
            self.__subscriptions.add(subscriber)

    def unsubscribe(self, subscriber):
        """Remove subscriber added previously.
        When there are no more subscribers the listener is stopped.
        """
        with self.__lock:
            self.__subscriptions.discard(subscriber)
            if not self.__subscriptions:
                self.disconnect()

    def disconnect(self):
        """Disconnect from the twitter stream and remove from the cache of listeners."""
        with self.__lock:
            if self.__connected:
                _log.info("Disconnecting twitter stream for [%s]" % ", ".join(self.__phrases))
                self.__listeners.pop(self.__phrases, None)
                self.__stream.disconnect()
                self.__connected = False

    @classmethod
    def disconnect_all(cls):
        """Disconnect all listeners."""
        with cls.__lock:
            for listener in list(cls.__listeners.values()):
                listener.disconnect()

    def on_data(self, data):
        data = json.loads(data)
        with self.__lock:
            self.__tweets.insert(0, data)
            self.__tweets = self.__tweets[:self.__max_size]
            for subscriber in self.__subscriptions:
                try:
                    subscriber.on_data(data)
                except Exception:
                    _log.error("Error calling subscriber", exc_info=True)
        return True

    def on_error(self, status):
        with self.__lock:
            for subscriber in self.__subscriptions:
                try:
                    subscriber.on_error(status)
                except Exception:
                    _log.error("Error calling subscriber", exc_info=True)

if __name__ == '__main__':
    import time

    class TestSubscriber(object):
        """simple subscriber that just prints tweets as they arrive"""

        def on_error(self, status):
            print("Error: %s" % status)

        def on_data(self, data):
            print(data.get("text"))

    subscriber = TestSubscriber()
    listener = TwitterListener.get_listener(["python", "excel", "pyxll"], subscriber)

    # listen for 60 seconds then stop
    time.sleep(60)
    listener.unsubscribe(subscriber)
[/sourcecode]

When this is run it’s very similar to the last case, except that now only the text part of each tweet is printed. Also note that the listener is not explicitly disconnected; that happens automatically when the last subscriber unsubscribes.

python twitterxl.py

INFO:__main__:Creating listener for python, excel, pyxll
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): stream.twitter.com
Linuxtoday Make a visual novel with Python: Linux User & Developer: Bridge the gap between books…
How to create drop down list in excel https://t.co/Ii2hKRlRBe…
RT @papisdotio: Flying dron with Python @theglamp #PAPIsConnect https://t.co/zzPNSFb66e…
RT @saaid230: The reason I work hard and try to excel at everything I do so one day I can take care …
RT @javacodegeeks: I’m reading 10 Awesome #Python Tutorials to Kick-Start my Web #Programming https:…
INFO:__main__:Disconnecting twitter stream for


Process finished with exit code 0

Step 4: Feed the real time Twitter data into Excel using PyXLL

Now that the hard part of getting the streaming Twitter data into Python is taken care of, creating a real time data source in Excel using PyXLL is pretty straightforward.

PyXLL 3.0 has a new class, `RTD`. When a function decorated with the `xl_func` decorator returns an RTD instance, the value of the calling cell will be the `value` property of the RTD instance. If the value property of the returned RTD instance later changes, the cell value changes.
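
The idea behind this can be sketched without PyXLL at all. The following is an illustrative toy (not PyXLL's implementation): a `value` property whose setter notifies a callback, analogous to the way Excel is notified when an RTD's value changes.

```python
# Toy model of the RTD idea: setting 'value' notifies an observer.
class SimpleRTD(object):
    def __init__(self, value):
        self._value = value
        self._on_change = None  # set by the host (Excel, in PyXLL's case)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        self._value = new_value
        if self._on_change is not None:
            self._on_change(new_value)

updates = []
rtd = SimpleRTD("Waiting for tweets...")
rtd._on_change = updates.append  # the host registers for change notifications
rtd.value = "first tweet"        # updates now contains "first tweet"
```

With real PyXLL, the host-side wiring is done for you; your subclass only has to assign to `self.value`.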

We will write a new class inheriting from RTD that acts as a subscriber to our twitter stream (in the same way as TestSubscriber in the code above). Whenever a new tweet is received it will update its value, and so the cell in Excel will update.

[sourcecode language="python"]
from pyxll import RTD

class TwitterRTD(RTD):
    """Twitter RTD class that notifies Excel whenever a new tweet is received."""

    def __init__(self, phrases):
        # call super class __init__ with an initial value
        super(TwitterRTD, self).__init__(value="Waiting for tweets...")

        # get the TwitterListener and subscribe to it
        self.__listener = TwitterListener.get_listener(phrases, self)

    def disconnect(self):
        # overridden from RTD base class. Called when Excel no longer
        # needs the RTD object (for example, when the cell formula
        # is changed).
        self.__listener.unsubscribe(self)

    def on_error(self, status):
        self.value = "#ERROR %s" % status

    def on_data(self, data):
        self.value = data.get("text")
[/sourcecode]

To expose that to Excel, all that’s needed is a function that returns an instance of our new TwitterRTD class:

[sourcecode language="python"]
from pyxll import xl_func
import itertools

@xl_func("string[] phrases: rtd")
def twitter_listen(phrases):
    """Listen for tweets containing certain phrases"""
    # flatten the 2d list of lists into a single list of phrases
    phrases = [str(x) for x in itertools.chain(*phrases) if x]
    assert len(phrases) > 0, "At least one phrase is required"

    # return our TwitterRTD object that will update when a tweet is received
    return TwitterRTD(phrases)
[/sourcecode]

All that’s required now is to add the module to the pyxll.cfg file, and then the new function ‘twitter_listen’ will appear in Excel, and calling it will return a live stream of tweets.
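
For example, assuming the code is saved as twitterxl.py (the filename used in the console examples above), the entry in pyxll.cfg might look like this; check the PyXLL documentation for the exact layout of your config file:

```
[PYXLL]
modules =
    twitterxl
```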


Step 5: Enrich the feed information with Tweet metadata

So far we’ve got live tweets streaming into Excel, which is pretty cool, but only one tweet is visible at a time and we can only see the tweet text. It would be even better to see a grid of data showing the most recent tweets with some metadata as well as the tweet itself.

RTD functions always return just a single cell of data, so what we need to do is write a slightly different function that takes a couple more arguments: a key for the part of the tweet we want (e.g. ‘text’ or ‘created_at’) and an index (e.g. 0 for the latest tweet, 1 for the second most recent tweet, etc.).

As some interesting bits of metadata are in nested dictionaries in the twitter data, the ‘key’ used to select the item from the data dictionary is a ‘/’ delimited list of keys used to drill into tweet data (for example, the name of the user is in the sub-dictionary ‘user’, so to retrieve it the key ‘user/name’ would be used).
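
The drill-down logic can be shown on its own. This is a hypothetical helper (the name `lookup` is invented for illustration) that mirrors the ‘/’ delimited key lookup described above:

```python
def lookup(tweet, key):
    """Drill into a nested dict using a '/' delimited key."""
    value = tweet
    for part in key.split("/"):
        if not isinstance(value, dict):
            value = ""
            break
        value = value.get(part, {})
    return str(value)

tweet = {"text": "hello", "user": {"name": "alice"}}
assert lookup(tweet, "user/name") == "alice"
assert lookup(tweet, "text") == "hello"
```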

The TwitterListener class we’ve written already keeps a limited history of the tweets it’s received so this isn’t too much more than we’ve already done.

[sourcecode language="python"]
class TwitterRTD(RTD):
    """Twitter RTD class that notifies Excel whenever a new tweet is received."""

    def __init__(self, phrases, row=0, key="text"):
        super(TwitterRTD, self).__init__(value="Waiting for tweets...")
        self.__listener = TwitterListener.get_listener(phrases, self)
        self.__row = row
        self.__key = key

    def disconnect(self):
        self.__listener.unsubscribe(self)

    def on_error(self, status):
        self.value = "#ERROR %s" % status

    def on_data(self, data):
        # if there are no tweets for this row return an empty string
        tweets = self.__listener.tweets
        if len(tweets) <= self.__row or not tweets[self.__row]:
            self.value = ""
            return

        # get the value from the tweets
        value = tweets[self.__row]
        for key in self.__key.split("/"):
            if not isinstance(value, dict):
                value = ""
                break
            value = value.get(key, {})

        # set the value back in Excel
        self.value = str(value)
[/sourcecode]

The worksheet function also has to be updated to take these extra arguments:

[sourcecode language="python"]
@xl_func("string[] phrases, int row, string key: rtd")
def twitter_listen(phrases, row=0, key="text"):
    """Listen for tweets containing certain phrases"""
    # flatten the 2d list of lists into a single list of phrases
    phrases = [str(x) for x in itertools.chain(*phrases) if x]
    assert len(phrases) > 0, "At least one phrase is required"

    # return our TwitterRTD object that will update when a tweet is received
    return TwitterRTD(phrases, row, key)
[/sourcecode]

After reloading the PyXLL addin, or restarting Excel, we can now call this modified function with different values for row and key to build an updating grid of live tweets.


One final step is to make sure that any active streams are disconnected when Excel closes, so that the tweepy background thread doesn’t stop Excel from exiting cleanly.

[sourcecode language="python"]
from pyxll import xl_on_close

@xl_on_close
def disconnect_all_listeners():
    TwitterListener.disconnect_all()
[/sourcecode]
The code from this blog is available on github https://github.com/pyxll/pyxll-examples/tree/master/twitter.

AAPG 2016 Conference Technical Presentation: Unlocking Whole Core CT Data for Advanced Description and Analysis

Microscale Imaging for Unconventional Plays Track Technical Presentation:

Unlocking Whole Core CT Data for Advanced Description and Analysis

Brendon Hall, Geoscience Applications Engineer, Enthought
American Association of Petroleum Geologists (AAPG)
2016 Annual Convention and Exposition Technical Presentation
Tuesday June 21st at 4:15 PM, Hall B, Room 2, BMO Centre, Calgary

Presented by: Brendon Hall, Geoscience Applications Engineer, Enthought, and Andrew Govert, Geologist, Cimarex Energy


It has become an industry standard for whole-core X-ray computed tomography (CT) scans to be collected over cored intervals. The resulting data is typically presented as static 2D images, video scans, and as 1D density curves.


CT scans of cores before and after processing to remove artifacts and normalize features.

However, the CT volume is a rich data set of compositional and textural information that can be incorporated into core description and analysis workflows. In order to access this information the raw CT data initially has to be processed to remove artifacts such as the aluminum tubing, wax casing and mud filtrate. CT scanning effects such as beam hardening are also accounted for. The resulting data is combined into contiguous volume of CT intensity values which can be directly calibrated to plug bulk density.

With this processed CT data:

  • The volume can be analyzed to identify bedding structure, dip angle, and fractures.
  • Bioturbation structures can often be easily identified by contrasts in CT intensity values due to sediment reworking or mineralization.
  • CT facies can be determined by segmenting the intensity histogram distribution. This provides continuous facies curves along the core, indicating relative amounts of each material. These curves can be integrated to provide estimates of net to gross even in finely interbedded regions. Individual curves often exhibit cyclic patterns that can help put the core in the proper sequence stratigraphic framework.
  • The CT volume can be analyzed to classify the spatial relationships between the intensity values to give a measure of texture. This can be used to further discriminate between facies that have similar composition but different internal organization.
  • Finally these CT derived features can be used alongside log data and core photographs to train machine learning algorithms to assist with upscaling the core description to the entire well.
Virtual Core Co-Visualization

Virtual Core software allows for co-visualization and macro- to microscopic investigation of core features.

Webinar: Fast Forward Through the “Dirty Work” of Data Analysis: New Python Data Import and Manipulation Tool Makes Short Work of Data Munging Drudgery

Python Import & Manipulation Tool Intro Webinar

No matter whether you are a data scientist, quantitative analyst, or an engineer, whether you are evaluating consumer purchase behavior, stock portfolios, or design simulation results, your data analysis workflow probably looks a lot like this:

Acquire > Wrangle > Analyze and Model > Share and Refine > Publish

The problem is that often 50 to 80 percent of time is spent wading through the tedium of the first two steps – acquiring and wrangling data – before even getting to the real work of analysis and insight. (See The New York Times, For Big-Data Scientists, ‘Janitor Work’ Is Key Hurdle to Insights)


Enthought Canopy Data Import Tool

Try the Data Import Tool with your own data. Download here.

In this webinar we’ll demonstrate how the new Canopy Data Import Tool can significantly reduce the time you spend on data analysis “dirty work,” by helping you:

  • Load various data file types and URLs containing embedded tables into Pandas DataFrames
  • Perform common data munging tasks that improve raw data
  • Handle complicated and/or messy data
  • Extend the work done with the tool to other data files


Webinar sample data sets:

Download a zip file of the example data sets

  1. Example 1 data set: bob-ross-elements
  2. Example 2 data set: pigeon-racing-results
  3. Example 3 data set: Oklahoma oil and gas well data: http://www.occeweb.com/og/ogdatafiles2.htm


Simply download Canopy and click on the “Data Import Tool” icon on the Welcome Screen.

Canopy Data Import Tool - Free Trial

Just Released: PyXLL v 3.0 (Python in Excel). New Real Time Data Stream Capabilities, Excel Ribbon Integration, and More.

Download a free 30 day trial of PyXLL and try it with your own data.

Since PyXLL was first released back in 2010 it has grown hugely in popularity and is used by businesses in many different sectors.

The original motivation for PyXLL was to be able to use all the best bits of Excel combined with a modern programming language for scientific computing, in a way that fits naturally and works seamlessly.

Since the beginning, PyXLL development focused on the things that really matter for creating useful real-world spreadsheets: worksheet functions and macro functions. Without these all you can do is just drive Excel by poking numbers in and reading numbers out. At the time the first version of PyXLL was released, that was already possible using COM, and so providing yet another API to do the same was seen as adding little value. On the other hand, being able to write functions and macros in Python opens up possibilities that previously were only available in VBA or by writing complicated Excel Addins in C++ or C#.

With the release of PyXLL 3, integrating your Python code into Excel has become more enjoyable than ever. Many things have been simplified to get you up and running faster, and there are some major new features to explore.

  • If you are new to PyXLL have a look at the Getting Started section of the documentation.
  • All the features of PyXLL, including these new ones, can be found in the Documentation


1. Ribbon Customization


Ever wanted to write an add-in that uses the Excel ribbon interface? Previously the only way to do this was to write a COM add-in, which requires a lot of knowledge, skill and perseverance! Now you can do it with PyXLL by defining your ribbon as an XML document and adding it to your PyXLL config. All the callbacks between Excel and your Python code are handled for you.

See the Customizing the Ribbon documentation for more detailed information or try the example included in the download.

2. RTD (Real Time Data) Functions


PyXLL can stream live data into your spreadsheet without you having to write any extra services or register any COM controls. Any Python function exposed to Excel through PyXLL can return a new RTD type that acts as a ticking data source; Excel updates whenever the returned RTD publishes new data.

See Real Time Data for more detailed information or try the example included in the download.

3. Function Signatures and Type Annotation

xl_func and xl_macro need to know the argument and return types to be able to tell Excel how they should be called. In previous versions that was always done by passing a ‘signature’ string to these decorators.

Now in PyXLL 3 the signature is entirely optional. If a signature is not supplied PyXLL will inspect the function and determine the signature for you.

If you use Python type annotations when declaring the function, PyXLL will use those when determining the function signature. Otherwise all arguments and the return type will be assumed to be `var`.
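
To illustrate the kind of introspection involved (this is a sketch of the general mechanism, not PyXLL’s internals), the standard `inspect` module can recover argument and return types from an annotated function without any signature string:

```python
import inspect

# A plain annotated function, like one you might expose with xl_func.
def add_numbers(a: float, b: float) -> float:
    return a + b

sig = inspect.signature(add_numbers)
arg_types = [p.annotation.__name__ for p in sig.parameters.values()]
return_type = sig.return_annotation.__name__
# arg_types is ['float', 'float'] and return_type is 'float'
```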

4. Default Keyword Arguments

Python functions with default keyword arguments now preserve their default values when called from Excel with missing arguments. This means that a function like the one below, when called from Excel with b or c missing, will be invoked with the correct default values for b and c.

 def func_with_kwargs(a, b=1, c=2):
     return a + b + c

5. Deep Reloading

If you’ve used PyXLL for a while you will have noticed that when you reload PyXLL only the modules listed in your pyxll.cfg file get reloaded. If you are working on a project that has multiple modules and not all of them are added to the config, those won’t get reloaded, even if modules that are listed in the config file import them.

PyXLL can now track all the imports made by each module listed in the config file, and when you reload PyXLL all of those modules will be reloaded in the right order.

This feature is enabled in the config file by setting

deep_reload = 1

6. Error Caching

Sometimes it’s not convenient to have to pick through the log file to determine why a particular cell is failing to calculate.

The new function get_last_error takes an XLCell or a COM Range and returns the last exception (and traceback) to have occurred in that cell.

This can be used in menu functions or other worksheet functions to give end users better feedback about any errors in the worksheet.

7. Python Functions for Reload and Rebind

PyXLL can now be reloaded or it can rebind its Excel functions using the new Python functions reload and rebind.

8. Better win32com and comtypes Support

PyXLL has always had some integration with the pythoncom module, but it required some user code to make it really useful. It didn’t have any direct integration with the higher level win32com package or the comtypes package.

The new function xl_app returns the current Excel Application instance either as a pythoncom PyIDispatch instance, a win32com.client.Dispatch instance or a wrapped comtypes POINTER(IUnknown) instance.

You may specify which COM library you want to use with PyXLL in the pyxll.cfg file

com_package = <win32com, comtypes or pythoncom>

Download a free 30 day trial of PyXLL and see how PyXLL can help you use the power of Python to make Excel an even more powerful data analysis tool.

The Latest Features in Virtual Core: CT Scan, Photo, and Well Log Co-visualization

Enthought is pleased to announce Virtual Core 1.8.  Virtual Core automates aspects of core description for geologists, drastically reducing the time and effort required for core description, and its unified visualization interface displays cleansed whole-core CT data alongside core photographs and well logs.  It provides tools for geoscientists to analyze core data and extract features from sub-millimeter scale to the entire core.


NEW VIRTUAL CORE 1.8 FEATURE: Rotational Alignment on Core CT Sections

Virtual Core 1.8 introduces the ability to perform rotational alignment on core CT sections.  Core sections can become misaligned during extraction and data acquisition.   The alignment tool allows manual realignment of the individual core sections.  Wellbore image logs (like FMI) can be imported and used as a reference when aligning core sections.  The Digital Log Interchange Standard (DLIS) is now fully supported, and can be used to import and export data.


Whole-core CT scans are routinely performed on extracted well cores.  The data produced from these scans is typically presented as static 2D images of cross sections and video scans.  Images are limited to those provided by the vendor, and the raw data, if supplied, is difficult to analyze.  However, the CT volume is a rich 3D dataset of compositional and textural information that can be incorporated into core description and analysis workflows.

Enthought’s proprietary Clear Core technology is used to process the raw CT data, which is notoriously difficult to analyze.  Raw CT data is stored in 3 foot sections, with each section consisting of many thousands of individual slice images, each approximately 0.2 mm thick.  This data is first combined to create a contiguous volume of the entire core.  The volume is then analyzed to remove the core barrel and mud and to correct for scanning artifacts such as beam hardening.  The image below shows data before and after Clear Core processing.

Clear Core processing prepares CT data for additional analysis.

Automated feature detection is performed during processing to identify bed boundaries, lamination, dip angle and textural features of the core.  A number of advanced machine learning algorithms and image analysis techniques are used during this step.  It is also possible to perform feature detection on core photographs.

Virtual Core provides an integrated environment for the co-visualization of the CT data along with high resolution core photographs (white light and UV) and well logs.  Data can be imported using a variety of industry standard formats, such as LAS and DLIS.  Thin section images, plug data and custom annotations can be added and viewed at specific depths along with the core data.  A CT volume viewer provides a full 3D rendering of the interior of the core to investigate bioturbation and sedimentary structures.


Virtual Core 1.8 also includes an updated machine learning and classification tool.  This feature provides an interface for a user to identify a lithology class of interest, and then automatically determines whether other regions in the entire core belong to the class or not.  This can be used to rapidly identify intervals that have certain features in common, such as bedding structures or density composition.

Stay tuned in the coming weeks for more details on the specific capabilities and features of Virtual Core.  If you would like more information please get in touch with us.  We’d be happy to schedule a demonstration and discuss how Virtual Core can help you unlock your core CT data.


Canopy Geoscience: Python-Based Analysis Environment for Geoscience Data

Today we officially release Canopy Geoscience 0.10.0, our Python-based analysis environment for geoscience data.

Canopy Geoscience integrates data I/O, visualization, and programming, in an easy-to-use environment. Canopy Geoscience is tightly integrated with Enthought Canopy’s Python distribution, giving you access to hundreds of high-performance scientific libraries to extract information from your data.

The Canopy Geoscience environment allows easy exploration of your data in 2D or 3D. The data is accessible from the embedded Python environment, and can be analyzed, modified, and immediately visualized with simple Python commands.

Feature and capability highlights for Canopy Geoscience version 0.10.0 include:

  • Read and write common geoscience data formats (LAS, SEG-Y, Eclipse, …)
  • 3D and 2D visualization tools
  • Well log visualization
  • Conversion from depth to time domain is integrated in the visualization tools using flexible depth-time models
  • Integrated IPython shell to programmatically access and analyse the data
  • Integrated with the Canopy editor for scripting
  • Extensible with custom-made plugins to fit your personal workflow

Contact us to learn more about Canopy Geoscience!

The Canopy Geoscience Team

Data can be visualized in 2D using a map view, or along a traverse (inline, crossline, or user-defined). Data defined in time and depth is co-visualized by selecting a depth-time model from the toolbar.

In the 2D map visualization, you can select a seismic volume or horizon to provide the reference grid coordinates.

3D and 2D visualization includes corner-grid volumes.

Plotting in Excel with PyXLL and Matplotlib

Author: Tony Roberts, creator of PyXLL, a Python library that makes it possible to write add-ins for Microsoft Excel in Python. Download a FREE 30 day trial of PyXLL here.

Python has a broad range of tools for data analysis and visualization. While Excel is able to produce various types of plots, sometimes it’s either not quite good enough or it’s just preferable to use matplotlib.

Users already familiar with matplotlib will be aware that when a plot is shown as part of a Python script, the script stops while the plot is shown and continues once the user has closed it. When doing the same in an IPython console, control returns to the IPython prompt immediately after the plot is shown, which is useful for interactive development.

Something that has been asked a couple of times is how to use matplotlib within Excel using PyXLL. As matplotlib is just a Python package like any other it can be imported and used in the same way as from any Python script. The difficulty is that when showing a plot the call to matplotlib blocks and so control isn’t returned to Excel until the user closes the window.

This blog shows how to plot data from Excel using matplotlib and PyXLL so that Excel can continue to be used while a plot window is active, and so that same window can be updated whenever the data in Excel is updated. Continue reading

Enthought’s Prabhu Ramachandran Announced as Winner of Kenneth Gonsalves Award 2014 at PyCon India

From PyCon India: Published / 25 Sep 2014

PSSI [Python Software Society of India] is happy to announce that Prabhu Ramachandran, faculty member of Department of Aerospace Engineering, IIT Bombay [and managing director of Enthought India] is the winner of Kenneth Gonsalves Award, 2014.

Enthought's Prabhu Ramachandran, winner of Kenneth Gonsalves Award 2014

Prabhu has been active in the Open source and Python community for close to 15 years. He co-founded the Chennai LUG in 1998. He is also well known as the author and lead developer of the award winning Mayavi and TVTK Python packages. He also maintains PySPH, an open source framework for Smoothed Particle Hydrodynamics (SPH) simulations.

Prabhu is also Member of Board, Python Software Foundation since 2010 and is closely involved with the activities of FOSSEE and SciPy India. His research interests are primarily in particle methods and applied scientific computing.

Prabhu will be presented the Award on 27th Sep, the opening day of PyCon India 2014. PSSI and Team PyCon India would like to extend their hearty Congratulations to Prabhu for his achievement and wish him the very best for his future endeavours.


Congratulations Prabhu, we’re honored to have you as part of the Enthought team!