Category Archives: Enthought Canopy

Using the Canopy Data Import Tool to Speed Cleaning and Transformation of Data & New Release Features

Enthought Canopy Data Import Tool

Download Canopy to try the Data Import Tool

In November 2016, we released Version 1.0.6 of the Data Import Tool (DIT), an addition to the Canopy data analysis environment. With the Data Import Tool, you can quickly import structured data files as Pandas DataFrames, clean and manipulate the data using a graphical interface, and create reusable Python scripts to speed future data wrangling.

For example, the Data Import Tool lets you delete rows and columns containing Null values, or replace the Null values in the DataFrame with a specific value. It also allows you to create new columns from existing ones. All operations are logged and reversible in the Data Import Tool, so you can experiment with different workflows while staying protected against errors and forgotten steps.
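
If you prefer to see the equivalent Pandas calls, the sketch below shows roughly what those operations look like in code. It is hand-written for illustration (the data and column names are made up), not output from the Tool:

    import numpy as np
    import pandas as pd

    # A tiny DataFrame with some Null (NaN) values, purely for illustration
    df = pd.DataFrame({"salary": [50000.0, np.nan, 72000.0],
                       "overtime": [5000.0, 2000.0, np.nan]})

    no_nulls = df.dropna()    # delete rows containing Null values
    filled = df.fillna(0)     # or replace the Null values with a specific value
    filled["total_pay"] = filled["salary"] + filled["overtime"]    # new column from existing ones
    print(filled)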


What’s New in the Data Import Tool November 2016 Release

Pandas 0.19 support, re-usable templates for data munging, and more.

Over the last couple of releases, we added several new features and enhanced a number of existing ones. A few notable changes are:

  1. The Data Import Tool now supports the recently released Pandas version 0.19.0. With this update, the Tool now supports Pandas versions 0.16 through 0.19.
  2. The Data Import Tool now allows you to delete empty columns in the DataFrame, similar to the existing option to delete empty rows.
  3. The Data Import Tool allows you to choose how to delete rows or columns containing Null values: “Any” or “All” methods are available.
  4. The Data Import Tool automatically generates a corresponding Python script for the data manipulations performed in the GUI and saves it in your home directory for re-use in future data wrangling.

    Every time you successfully import a DataFrame, the Data Import Tool automatically saves the generated Python script in your home directory. This way, you can easily review and reproduce your earlier work (a sketch of such a script appears after this list).

  5. The Data Import Tool generates a Template with every successful import. A Template is a file that contains all of the commands or actions you performed on the DataFrame and a unique Template file is generated for every unique data file. With this feature, when you load a data file, if a Template file exists corresponding to the data file, the Data Import Tool will automatically perform the operations you performed the last time. This way, you can save progress on a data file and resume your work.
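
To make the last few items concrete, a generated script or Template is just ordinary Pandas code that replays your steps. Here is a hypothetical sketch; the file name, column names, and exact calls are made up, and the Tool’s real output will differ in detail:

    import pandas as pd

    # Re-load the original file and replay the recorded cleaning steps
    df = pd.read_csv("employee_compensation.csv")
    df = df.dropna(axis=1, how="all")    # delete columns where all values are Null
    df = df.dropna(axis=0, how="any")    # delete rows containing any Null value
    df["Total Pay"] = df["Salaries"] + df["Overtime"]    # new column from existing ones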

Along with the feature additions discussed above, based on continued user feedback, we implemented a number of UI/UX improvements and bug fixes in this release. For a complete list of changes introduced in Version 1.0.6 of the Data Import Tool, please refer to the Release Notes page in the Tool’s documentation.

Example Use Case: Using the Data Import Tool to Speed Data Cleaning and Transformation

Now let’s take a look at how the Data Import Tool can be used to speed up the process of cleaning up and transforming data sets. As an example data set, let’s take a look at the Employee Compensation data from the city of San Francisco.

NOTE: You can follow the example step-by-step by downloading Canopy and starting a free 7-day trial of the Data Import Tool.

Step 1: Load data into the Data Import Tool

First we’ll download the data as a .csv file from the San Francisco Government data website, then open it via the File -> Import Data -> From File… menu item in the Canopy Editor (see screenshot at right).

After loading the file, you should see the DataFrame below in the Data Import Tool:
DataFrame view of the imported data in the Data Import Tool
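
For comparison, the same load can be done by hand in Pandas; the file name below is simply whatever you saved the download as:

    import pandas as pd

    # Load the downloaded file and take a quick look, mirroring the Tool's DataFrame view
    df = pd.read_csv("Employee_Compensation.csv")
    print(df.shape)     # number of rows and columns
    print(df.dtypes)    # inferred column types
    print(df.head())    # first few rows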

Canopy Data Import Tool: New Updates

In May of 2016 we released the Canopy Data Import Tool, a significant new feature of our Canopy graphical analysis environment software. With the Data Import Tool, users can now quickly and easily import CSVs and other structured text files into Pandas DataFrames through a graphical interface, manipulate the data, and create reusable Python scripts to speed future data wrangling.

Watch a 2-minute demo video to see how the Canopy Data Import Tool works:

With the latest version of the Data Import Tool released this month (v. 1.0.4), we’ve added new capabilities and enhancements, including:

  1. The ability to select and import a specific table from among multiple tables on a webpage (see the sketch after this list),
  2. Intelligent alerts regarding the saved state of exported Python code, and
  3. Unlimited file sizes supported for import.
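
On the first point, you can do something similar in plain Pandas with read_html. The sketch below is illustrative only: the URL is an arbitrary example, the table index depends on the page, and read_html needs an HTML parser such as lxml installed:

    import pandas as pd

    # read_html parses every <table> on the page into a list of DataFrames
    url = "https://en.wikipedia.org/wiki/List_of_S%26P_500_companies"    # example page, assumed
    tables = pd.read_html(url)
    print("Found %d tables" % len(tables))

    df = tables[0]    # pick the specific table you want by index
    print(df.head())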

Download Canopy and start a free 7-day trial of the Data Import Tool.

Webinar: Fast Forward Through the “Dirty Work” of Data Analysis: New Python Data Import and Manipulation Tool Makes Short Work of Data Munging Drudgery

Python Import & Manipulation Tool Intro Webinar

Whether you are a data scientist, quantitative analyst, or engineer, and whether you are evaluating consumer purchase behavior, stock portfolios, or design simulation results, your data analysis workflow probably looks a lot like this:

Acquire > Wrangle > Analyze and Model > Share and Refine > Publish

The problem is that 50 to 80 percent of the time is often spent wading through the tedium of the first two steps, acquiring and wrangling data, before even getting to the real work of analysis and insight. (See The New York Times, For Big-Data Scientists, ‘Janitor Work’ Is Key Hurdle to Insights.)

WHAT YOU’LL LEARN:

Enthought Canopy Data Import Tool

Try the Data Import Tool with your own data. Download here.

In this webinar we’ll demonstrate how the new Canopy Data Import Tool can significantly reduce the time you spend on data analysis “dirty work,” by helping you:

  • Load various data file types and URLs containing embedded tables into Pandas DataFrames
  • Perform common data munging tasks that improve raw data
  • Handle complicated and/or messy data
  • Extend the work done with the tool to other data files

WEBINAR RECORDING:

Enthought Canopy 1.4 Released: Includes New Canopy-Configured Command Prompt

Enthought Canopy Product Page | Download Enthought Canopy

Enthought Canopy 1.4 is now available! Users can easily update to this latest version by clicking the green “Update available” link at the bottom right of the Canopy intro screen or by going to Help > Canopy Application Updates within the application.

Key additions in this release are a Canopy-configured command prompt, inclusion of new packages in the full installer utilized by IT groups and users running from disconnected networks, and continued stability upgrades. We’ve also updated or added over 50 supported packages in Canopy’s Package Manager on a continual basis since the v.1.3 release. See the full release notes and the full list of currently available Canopy packages.

New Canopy-Configured Command Prompt

An important usability feature added in Enthought Canopy 1.4 is a Canopy-configured command prompt, available from the Canopy Editor window on all platforms via Tools > Command Prompt. When selected, this opens a Command Prompt (Windows) or Terminal (Linux, Mac OS) window pre-configured with the correct environment settings to use Canopy’s Python installation from the command line. This avoids having to modify your login environment variables; in particular, on Windows, when using standard (i.e., non-administrative) user accounts it can be difficult to override some system settings.
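
A quick way to confirm that the prompt really is using Canopy’s Python (the exact paths printed will depend on your installation):

    # Run this from the Canopy-configured Command Prompt / Terminal, e.g.:
    #     python -c "import sys; print(sys.executable); print(sys.prefix)"
    import sys

    print(sys.executable)    # path to the interpreter the prompt launches
    print(sys.prefix)        # root of that Python installation (should be inside Canopy)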

PyXLL: Deploy Python to Excel Easily

PyXLL Solution Home | Buy PyXLL | Press Release

Today Enthought announced that it is now the worldwide distributor for PyXLL, and we’re excited to offer this key product for deploying Python models, algorithms and code to Excel. Technical teams can use the full power of Enthought Canopy, or another Python distro, and end-users can access the results in their familiar Excel environment. And it’s straightforward to set up and use.

Installing PyXLL from Enthought Canopy

PyXLL is available as a package subscription (with significant discounts for multiple users). Once you’ve purchased a subscription you can easily install it via Canopy’s Package Manager as shown in the screenshots below (note that at this time PyXLL is only available for Windows users). The rest of the configuration instructions are in the Quick Start portion of the documentation. PyXLL itself is a plug-in to Excel. When you start Excel, PyXLL loads into Excel and reads in Python modules that you have created for PyXLL. This makes PyXLL especially useful for organizations that want to manage their code centrally and deploy to multiple Excel users.

Enthought Canopy Package Manager | Install PyXLL from Enthought Canopy's Package Manager

Creating Excel Functions with PyXLL

To create a PyXLL Python Excel function, you use the @xl_func decorator to tell PyXLL that the following function should be registered with Excel, what its argument types are, and optionally what its return type is. PyXLL also reads the function’s docstring and provides it in the Excel function description. As an example, I created a module my_pyxll_module.py and registered it with PyXLL via the PyXLL configuration file.
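
Below is a minimal sketch of such a module. It is not the module from the post; the function, its signature string, and the values in the usage example are made up for illustration:

    # my_pyxll_module.py
    from pyxll import xl_func

    @xl_func("float principal, float rate, int periods: float")
    def compound_value(principal, rate, periods):
        """Future value of principal compounded at rate for the given number of periods."""
        return principal * (1.0 + rate) ** periods

With the module listed in your PyXLL configuration, you can then call =compound_value(1000, 0.05, 10) from a worksheet cell, and the docstring appears as the function’s description in Excel.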

Enthought Canopy 1.3 Released: Includes Move to Python 2.7.6

Enthought Canopy Product Page | Download Enthought Canopy

Enthought Canopy 1.3 is now available and users should see the update notification in the bottom right corner of the Canopy welcome screen (as shown in the image below). This is a fairly small update primarily focused on bug fixing and stability improvement. The biggest change is the move to Python 2.7.6 from 2.7.3.

Enthought Canopy Update Available Notification
The bottom right of the Enthought Canopy window notifies users to available updates

Python 2.7.6 rolls up a couple of minor updates to the core Python environment. The most important changes from our perspective are a number of security fixes required by some users as well as fixes for Mac OS “Mavericks.” Details can be found in the Python release notes, but in general the change should be transparent to most users. The only caveat is for users building Python eggs with native C or FORTRAN extensions and publishing those eggs to users who may still be running earlier versions of Canopy or Python 2.7.3 in general. In this case, it is safest to continue building against earlier versions of Canopy.

But isn’t updating Python versions painful, you may ask? In the past, yes, updating to a new Python version often required a new Python install and then re-installing all of your custom packages. However, with Canopy’s auto-update mechanism, it’s simply a matter of clicking the “Update available” link and choosing “Install and relaunch” or “Install after quit.” Canopy will automatically update the core Python installation and restart without impacting your environment. Additionally, whether you are running Canopy 1.1, 1.1.1, or 1.2, Canopy will jump straight to 1.3 and get you all of the latest updates.

We encourage all users to update to Canopy 1.3 as the 1.2 and 1.3 versions include a large number of stability fixes as well as cleaning up a lot of other less serious, but still important aspects of the user experience. For those new to Canopy, you can get Canopy here.

Enthought Canopy makes Python updates convenient
Enthought Canopy makes updates convenient with automatic downloads that install without impacting user environments

Keep up with all of the latest news from Enthought on our social media channels:  Linked In | Twitter | Google+ | Facebook | YouTube

Enthought Canopy v1.2 is Out: PTVS, Mavericks, and Qt

Author: Jason McCampbell

Canopy 1.2 is out! The release of Mac OS “Mavericks” as a free update broke a few features, primarily IPython, so we held the release to make sure everything worked. That ended up taking longer than we wanted, but 1.2 is finally out and adds support for Mavericks. There is one remaining Mavericks-specific Qt font issue we are working on correcting: it causes the wrong system font to be selected, so UIs look less polished than they should.

Enthought Canopy integrated into PTVS

The biggest new feature is integration with Microsoft’s Python Tools for Visual Studio (PTVS) package. PTVS is a full, professional-grade development IDE for Python based on Visual Studio and provides mixed Python/C debugging. The ability to do mixed-mode debugging is a huge boon to software developers creating C (or FORTRAN) extensions to Python. Canopy v1.2 includes a custom DLL that allows us to integrate more completely with PTVS and solves some issues with auto-completion of Python standard library calls.

Beyond PTVS, we have added the Qt development tools, such as qmake and the UIC compiler, to the Canopy installation tree. These tools are available on all platforms now and enable Qt developers to access them from Canopy directly rather than having to build the tools themselves.

Canopy 1.2 includes a large number of smaller additions and stability improvements. Highlights can be found in the release notes and we encourage all users to update existing installs. As always, thanks for using Canopy and please don’t hesitate to drop us a note letting us know what you like or what you would like to see improved. You can contact us via the Help -> Suggestions/Feedback menu item or by sending email to canopy.support@enthought.com.

And you can download Canopy from the Enthought Store page.

Python at Inflection Point in HPC

Authors: Kurt Smith, Robert Grant, and Lauren Johnson

We attended SuperComputing 2013, held November 17-22 in Denver, and saw huge interest around Python. There were several Python related events, including the “Python in HPC” tutorial (Monday), the Python BoF (Tuesday), and a “Python for HPC” workshop held in parallel with the tutorial on Monday. But we had some of our best conversations on the trade show floor.

Python Buzz on the Floor

The Enthought booth had a prominent “Python for HPC: High Productivity Computing” headline, and we looped videos of our parallelized 2D Julia set rendering GUI (video below).  The parallelization used Cython’s OpenMP functionality, came in at around 200 lines of code, and generated lots of discussions.  We also used a laptop to display an animated 3D Julia set rendered in Mayavi and to demo Canopy.

Many people came up to us after seeing our banner and video and asked “I use Python a little bit, but never in HPC – what can you tell me?”  We spoke with hundreds of people and had lots of good conversations.

It really seems like Python has reached an inflection point in HPC.

Python in HPC Tutorial, Monday

Kurt Smith presented a 1/4 day section on Cython, which was a shortened version of what he presented at SciPy 2013.  In addition, Andy Terrel presented “Introduction to Python”; Aron Ahmadia presented “Scaling Python with MPI”; and Travis Oliphant presented “Python and Big Data”. You can find all the material on the PyHPC.org website.

The tutorial was generally well attended: about 100–130 people.  A strong majority of attendees were already programming in Python, with about half using Python in a performance-critical area and perhaps 10% running Python on supercomputers or clusters directly.

In the Cython section of the tutorial, Kurt went into more detail on how to use OpenMP with Cython, which was of interest to many based on questions during the presentation. For the exercises, students were given temporary accounts on  Stampede (TACC’s latest state-of-the-art supercomputer) to help ensure everyone was able to get their exercise environment working.

Andy’s section of the day went well, covering the basics of using Python. Aron’s section was good for establishing that Python+MPI4Py can scale to ~65,000 nodes on massive supercomputers, and also for addressing people’s concerns regarding the import challenge.
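
For readers who have not seen MPI4Py before, here is a minimal sketch of the style of code involved (this is not material from the tutorial):

    # hello_mpi.py -- run with, e.g.:  mpiexec -n 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()    # this process's id within the communicator
    size = comm.Get_size()    # total number of processes

    # Gather every rank's id onto rank 0 and report
    ranks = comm.gather(rank, root=0)
    if rank == 0:
        print("Hello from %d MPI processes: %s" % (size, ranks))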

Python in HPC workshop, Monday

There was a day-long workshop of presentations on “Python in HPC” which ran in parallel with the “Python for HPC” tutorial. Of particular interest were the talks on “Doubling the performance of NumPy” and “Bohrium: Unmodified NumPy code on CPU, GPU, and Cluster”.

Python for High Performance and Scientific Computing BoF, Tuesday

Andy Terrel, William Scullin, and Andreas Schreiber organized a Birds-of-a-Feather session on Python, which had about 150 attendees (many thanks to all three for organizing a great session!).  Kurt gave a lightning talk on Enthought’s SBIR work.  The other talks focused on applications of Python in HPC settings, as well as IPython notebooks on the basics of the Navier-Stokes equations.

It was great to see so much interest in Python for HPC!