
The Latest Features in Virtual Core: CT Scan, Photo, and Well Log Co-visualization

Enthought is pleased to announce Virtual Core 1.8.  Virtual Core automates aspects of core description, drastically reducing the time and effort geologists spend on it, and its unified visualization interface displays cleansed whole-core CT data alongside core photographs and well logs.  It provides tools for geoscientists to analyze core data and extract features from the sub-millimeter scale up to the entire core.

 

NEW VIRTUAL CORE 1.8 FEATURE: Rotational Alignment on Core CT Sections

Virtual Core 1.8 introduces the ability to perform rotational alignment on core CT sections.  Core sections can become misaligned during extraction and data acquisition.  The alignment tool allows manual realignment of individual core sections, and wellbore image logs (such as FMI) can be imported and used as a reference when aligning them.  The Digital Log Interchange Standard (DLIS) is now fully supported and can be used to import and export data.

 

Whole-core CT scans are routinely performed on extracted well cores.  The data produced from these scans is typically presented as static 2D images of cross sections and video scans.  Images are limited to those provided by the vendor, and the raw data, if supplied, is difficult to analyze.  However, the CT volume is a rich 3D dataset of compositional and textural information that can be incorporated into core description and analysis workflows.

Enthought’s proprietary Clear Core technology is used to process the raw CT data, which is notoriously difficult to analyze.  Raw CT data is stored in 3-foot sections, each consisting of many thousands of individual slice images approximately 0.2 mm thick.  This data is first combined to create a contiguous volume of the entire core.  The volume is then analyzed to remove the core barrel and mud and to correct for scanning artifacts such as beam hardening.  The image below shows data before and after Clear Core processing.

Clear Core processing prepares CT data for additional analysis.
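Clear Core’s processing itself is proprietary, but the first step described above, assembling thousands of thin slice images into one contiguous volume, can be sketched in a few lines of NumPy. The slice list, thickness, and cross-section below are illustrative assumptions, not the actual Clear Core pipeline.

```python
# Illustrative sketch only -- not the Clear Core pipeline.  Assumes the slice
# images for one core section have already been read into equally sized 2D arrays.
import numpy as np

def stack_section(slices, slice_thickness_mm=0.2):
    """Stack ~0.2 mm CT slices into a contiguous 3D volume for one section."""
    volume = np.stack(slices, axis=0)                  # shape: (n_slices, rows, cols)
    depths_mm = np.arange(len(slices)) * slice_thickness_mm
    return volume, depths_mm

# A single longitudinal cross-section, analogous to the static 2D images vendors supply:
# volume, depths_mm = stack_section(slices)
# cross_section = volume[:, volume.shape[1] // 2, :]
```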

Automated feature detection is performed during processing to identify bed boundaries, lamination, dip angle and textural features of the core.  A number of advanced machine learning algorithms and image analysis techniques are used during this step.  It is also possible to perform feature detection on core photographs.

Virtual Core provides an integrated environment for the co-visualization of the CT data along with high resolution core photographs (white light and UV) and well logs.  Data can be imported using a variety of industry standard formats, such as LAS and DLIS.  Thin section images, plug data and custom annotations can be added and viewed at specific depths along with the core data.  A CT volume viewer provides a full 3D rendering of the interior of the core to investigate bioturbation and sedimentary structures.

NEW VIRTUAL CORE 1.8 FEATURE: Machine Learning and Classification Tool

Virtual Core 1.8 also includes an updated machine learning and classification tool.  This feature provides an interface for a user to identify a lithology class of interest, and then automatically determines whether other regions of the core belong to that class.  It can be used to rapidly identify intervals that have certain features in common, such as bedding structures or density composition.
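Virtual Core’s classifier is proprietary, but the workflow it describes, labeling an interval of interest and then scoring the rest of the core, is standard supervised classification. Here is a minimal, generic scikit-learn sketch of that idea; the feature matrix and labels are placeholders, not Virtual Core’s actual features or algorithm.

```python
# Generic supervised-classification sketch -- not Virtual Core's implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
features = rng.normal(size=(1000, 8))          # placeholder per-interval features (e.g. CT texture stats)
labels = (features[:, 0] > 0.5).astype(int)    # placeholder "class of interest" labels

# Train on the intervals the user has labeled, then flag the rest of the core.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[:200], labels[:200])
in_class = clf.predict(features[200:])
print(in_class.sum(), "of", len(in_class), "remaining intervals flagged")
```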

Stay tuned in the coming weeks for more details on the specific capabilities and features of Virtual Core.  If you would like more information, please get in touch with us.  We’d be happy to schedule a demonstration and discuss how Virtual Core can help you unlock your core CT data.

 

The Latest and Greatest Pandas Features (since v 0.11)

On May 28, 2014, Phillip Cloud, core contributor for the Pandas data analytics Python library, spoke at a joint meetup of the New York Quantitative Python User’s Group (NY QPUG) and the NY Finance PUG. Enthought hosted, and about 60 people joined us to listen to Phillip present some of the lesser-known but really useful features that have come out since Pandas version 0.11, plus some that are coming soon. We all learned more about how to take full advantage of the Pandas Python library, and got a better sense of how excited Phillip was to discover Pandas during his graduate work.

Pandas to MATLAB

After a fairly comprehensive overview of Pandas, Phillip got into the new features, starting with those introduced in version 0.11.

Implied Volatility with Python’s Pandas Library AND Python in Excel

Authors: Brett Murphy and Aaron Waters

The March 6 New York Quantitative Python User’s Group (NY QPUG) Meetup included presentations by NAG (Numerical Algorithms Group), known for its high quality numerical computing software and high performance computing (HPC) services, and Enthought, a provider of scientific computing solutions powered by Python.

Brian Spector, a technical consultant at NAG, presented “Implied Volatility using Python’s Pandas Library.” He covered a technique and script for calculating the implied volatility of option prices under the Black–Scholes formula using Pandas and nag4py. With this technique, you can determine the volatility at which the Black–Scholes price equals the market price; that volatility is the implied volatility observed in the market. Brian fitted polynomials of varying degree to the volatility curves, then examined the volatility surface and its sensitivity with respect to the interest rate. See the full presentation in the video below:

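The talk used nag4py for the numerical work; as a rough, non-NAG sketch of the same idea, implied volatility can be computed with NumPy and SciPy by inverting the Black–Scholes call price. The prices and rates below are arbitrary illustrations.

```python
# Sketch: implied volatility by inverting Black-Scholes with SciPy
# (the talk itself used nag4py; this is an illustrative alternative).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(market_price, S, K, T, r):
    """Volatility at which the Black-Scholes price matches the market price."""
    return brentq(lambda sigma: bs_call(S, K, T, r, sigma) - market_price, 1e-6, 5.0)

print(implied_vol(market_price=10.45, S=100.0, K=100.0, T=1.0, r=0.01))  # roughly 0.25
```

Fitting polynomials of varying degree to the implied volatilities across strikes, as Brian did, is then a single call to numpy.polyfit.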

Python at Inflection Point in HPC

Authors: Kurt Smith, Robert Grant, and Lauren Johnson

We attended SuperComputing 2013, held November 17–22 in Denver, and saw huge interest in Python. There were several Python-related events, including the “Python in HPC” tutorial (Monday), the Python BoF (Tuesday), and a “Python for HPC” workshop held in parallel with the tutorial on Monday. But we had some of our best conversations on the trade show floor.

Python Buzz on the Floor

The Enthought booth had a prominent “Python for HPC: High Productivity Computing” headline, and we looped videos of our parallelized 2D Julia set rendering GUI (video below).  The parallelization used Cython’s OpenMP functionality, came in at around 200 lines of code, and generated lots of discussion.  We also used a laptop to display an animated 3D Julia set rendered in Mayavi and to demo Canopy.
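For readers unfamiliar with the demo, here is a serial NumPy sketch of the kind of per-pixel escape-time iteration the booth GUI parallelized with Cython and OpenMP; the constant c, grid, and iteration count are arbitrary choices, not the demo’s code.

```python
# Serial NumPy sketch of a 2D Julia set (the booth demo parallelized this kind
# of per-pixel iteration with Cython's OpenMP prange).
import numpy as np

def julia(c=-0.835 - 0.2321j, n=800, max_iter=200, bound=2.0):
    x = np.linspace(-1.5, 1.5, n)
    z = x[None, :] + 1j * x[:, None]            # grid of starting points
    counts = np.zeros(z.shape, dtype=np.int32)
    still_bounded = np.ones(z.shape, dtype=bool)
    for i in range(max_iter):
        z[still_bounded] = z[still_bounded] ** 2 + c   # iterate z -> z**2 + c
        still_bounded &= np.abs(z) < bound             # points that have not escaped
        counts[still_bounded] = i
    return counts                                      # escape-time image
```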

Many people came up to us after seeing our banner and video and asked “I use Python a little bit, but never in HPC – what can you tell me?”  We spoke with hundreds of people and had lots of good conversations.

It really seems like Python has reached an inflection point in HPC.

Python in HPC Tutorial, Monday

Kurt Smith presented a quarter-day section on Cython, a shortened version of what he presented at SciPy 2013.  In addition, Andy Terrel presented “Introduction to Python”; Aron Ahmadia presented “Scaling Python with MPI”; and Travis Oliphant presented “Python and Big Data”. You can find all the material on the PyHPC.org website.

The tutorial was generally well attended: about 100–130 people.  A strong majority of attendees were already programming in Python, with about half using Python in a performance-critical area and perhaps 10% running Python on supercomputers or clusters directly.

In the Cython section of the tutorial, Kurt went into more detail on how to use OpenMP with Cython, which, judging by the questions during the presentation, was of interest to many. For the exercises, students were given temporary accounts on Stampede (TACC’s latest state-of-the-art supercomputer) to help ensure everyone was able to get their exercise environment working.

Andy’s section of the day went well, covering the basics of using Python.  Aron’s section was good for establishing that Python + mpi4py can scale to ~65,000 nodes on massive supercomputers, and also for addressing people’s concerns regarding the import challenge.
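For readers who have not seen mpi4py, the basic pattern that scaling work builds on is small. A minimal sketch (the chunk-of-a-sum workload is just an illustration; run it under mpirun or your scheduler’s launcher):

```python
# hello_reduce.py -- minimal mpi4py sketch; run with e.g. `mpirun -n 4 python hello_reduce.py`
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank sums its own strided chunk, then the partial sums are reduced to rank 0.
local_sum = np.arange(rank, 10000000, size, dtype=np.float64).sum()
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("total across", size, "ranks:", total)
```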

Python in HPC Workshop, Monday

There was a day-long workshop of presentations on “Python in HPC”, which ran in parallel with the “Python for HPC” tutorial. Of particular interest were the talks on “Doubling the performance of NumPy” and “Bohrium: Unmodified NumPy code on CPU, GPU, and Cluster”.

Python for High Performance and Scientific Computing BoF, Tuesday

Andy Terrel, William Scullin, and Andreas Schreiber organized a Birds-of-a-Feather session on Python, which had about 150 attendees (many thanks to all three for organizing a great session!).  Kurt gave a lightning talk on Enthought’s SBIR work.  The other talks focused on applications of Python in HPC settings, as well as on IPython notebooks covering the basics of the Navier–Stokes equations.

It was great to see so much interest in Python for HPC!

PyQL and QuantLib: A Comprehensive Finance Framework

Authors: Kelsey Jordahl and Brett Murphy

Earlier this month at the first New York Finance Python User’s Group (NY FPUG) meetup, Kelsey Jordahl talked about how PyQL streamlines the development of Python-based finance applications using QuantLib. There were about 30 people attending the talk at the Cornell Club in New York City. We have a recording of the presentation below.

FPUG Meetup Presentation Screenshot

QuantLib is a free, open-source (BSD-licensed) quantitative finance package. It provides tools for financial instruments, yield curves, pricing engines, simulations, and date/time management. There is a lot more detail on the QuantLib website, along with the latest downloads. Kelsey refers to a really useful blog and open-source book on implementing QuantLib, written by one of the core QuantLib developers. QuantLib also comes with bindings for several languages, including Python.

So why use PyQL if there are already Python bindings in QuantLib? In short, PyQL provides a much more Pythonic set of APIs. Kelsey discusses some of the differences between the original QuantLib Python API and the PyQL API and how PyQL streamlines the resulting Python code: you get better integration with other packages like NumPy, better namespace usage, and better documentation. PyQL is available on GitHub in the PyQL repo. Kelsey uses the IPython Notebooks in the examples directory to explore PyQL and QuantLib, and compares the use of PyQL with the standard (SWIG) QuantLib Python APIs.
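As a point of reference for the comparison Kelsey makes, here is roughly what pricing a plain European call looks like with the standard SWIG-generated QuantLib Python bindings. The dates, rates, and volatility are arbitrary, and the class names follow the QuantLib-Python bindings rather than PyQL.

```python
# Pricing a European call with the standard (SWIG) QuantLib Python bindings --
# illustrative values only; PyQL wraps the same machinery behind a more Pythonic API.
import QuantLib as ql

today = ql.Date(6, ql.March, 2014)
ql.Settings.instance().evaluationDate = today
day_count = ql.Actual365Fixed()

option = ql.VanillaOption(ql.PlainVanillaPayoff(ql.Option.Call, 100.0),
                          ql.EuropeanExercise(ql.Date(6, ql.March, 2015)))

spot = ql.QuoteHandle(ql.SimpleQuote(100.0))
rates = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.01, day_count))
dividends = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.0, day_count))
vol = ql.BlackVolTermStructureHandle(
    ql.BlackConstantVol(today, ql.TARGET(), 0.20, day_count))

process = ql.BlackScholesMertonProcess(spot, dividends, rates, vol)
option.setPricingEngine(ql.AnalyticEuropeanEngine(process))
print(option.NPV())
```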

PyQL remains a work in progress, with goals of making its QuantLib coverage more complete, making the API even more Pythonic, and getting a successful build on Windows (it works on Mac OS and Linux now). It’s open source, so feel free to step up and contribute!

For the details, check out the video of Kelsey’s presentation (44 minutes).

And here are the slides online if you want to check the links in the presentation.

If you are interested in working on either QuantLib or PyQL, let the maintainers know!

Exploring NumPy/SciPy with the “House Location” Problem

Author: Aaron Waters

I created a Notebook that describes how to examine, illustrate, and solve a geometric mathematical problem called “House Location” using Python’s mathematical and numerical libraries. The discussion uses symbolic computation, visualization, and numerical computation to solve the problem while exercising the NumPy, SymPy, Matplotlib, IPython, and SciPy packages.

I hope that this discussion will be accessible to people with a minimal background in programming and a high-school level background in algebra and analytic geometry. There is a brief mention of complex numbers, but the use of complex numbers is not important here except as “values to be ignored”. I also hope that this discussion illustrates how to combine different mathematically oriented Python libraries and explains how to smooth out some of the rough edges between the library interfaces.

http://nbviewer.ipython.org/urls/raw.github.com/awatters/CanopyDemoArchive/master/misc/house_locations.ipynb
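The notebook linked above defines the actual “House Location” geometry; as a tiny illustration of the library hand-off it describes (solve symbolically, discard the complex “values to be ignored”, then work numerically), here is a sketch with a placeholder equation, not the notebook’s problem.

```python
# Sketch of the SymPy -> NumPy hand-off, with a placeholder equation
# (not the notebook's actual "House Location" problem).
import numpy as np
import sympy as sp

x = sp.symbols('x')
solutions = sp.solve(sp.Eq(x**4, 1), x)                    # exact symbolic solutions
real_solutions = [s for s in solutions if sp.im(s) == 0]   # complex roots are "values to be ignored"
print(np.array([float(s) for s in real_solutions]))        # numeric values for plotting, etc.
```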

Advanced Cython Recorded Webinar: Typed Memoryviews

Author: Kurt Smith

Typed memoryviews are a new Cython feature for accessing memory buffers, such as NumPy arrays, without any Python overhead. This makes them very useful for manipulating blocks of memory in Cython directly without calling into the Python-C API.  Typed memoryviews have a clean declaration syntax and have a NumPy-like look and feel, supporting slicing, striding and indexing.
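As a small illustration (not taken from the webinar), a Cython function that sums the rows of a 2D buffer through a typed memoryview might look like the .pyx sketch below; any NumPy array, or any other object supporting the buffer protocol, can be passed in directly.

```cython
# row_sums.pyx -- minimal typed-memoryview sketch (not from the webinar itself)
import numpy as np

def row_sums(double[:, :] data):
    """Sum each row of a 2D float64 buffer without Python-C API calls in the loop."""
    cdef Py_ssize_t i, j
    cdef double[:] out = np.zeros(data.shape[0])
    for i in range(data.shape[0]):
        for j in range(data.shape[1]):
            out[i] += data[i, j]
    return np.asarray(out)
```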

I go into more detail and provide some specific examples of how to use typed memoryviews in this webinar: “Advanced Cython: Using the new Typed Memoryviews”.

If you would like to watch the recorded webinar, you can find a link below (the different formats play directly in different browsers, so check which one works for you; that way you won’t have to download the whole recording ahead of time):

“venv” in Python 2.7 and How It Simplifies Life

Virtual environments, specifically ‘venv’, which we backported from Python 3.x, are a technology that enables the creation of multiple lightweight, independent Python environments. Each virtual environment appears to be a self-contained Python installation, but loads the Python standard library and other common resources from a common base Python installation. Optionally, a virtual environment can also load packages from its base Python environment, whether that’s Canopy Core itself or another virtual environment.

What makes virtual environments so interesting? Well, they save disk space, since the full Python environment doesn’t have to be duplicated each time. But more than that, making Python environments far “lighter” enables several interesting capabilities.

First, the most common use of virtual environments is to allow separate projects to run in separate environments with different package requirements. Each Python application runs in a separate virtual environment, so package updates needed for one application don’t break the others. This model has long been used by web developers, as well as by a few scientific software developers.

The second case is specifically enabled by Canopy. Sharp-eyed readers will have noted in the first paragraph that we said that a virtual environment can have Canopy Core or another virtual environment as the base. But virtual environments can’t be layered, right? Now they can.

We have extended venv to support arbitrary numbers of layers, so we can do something like this:

‘venv’ in Canopy

‘Project1’ can be created with the following Canopy command:

canopy_cli setup ./Project1

Canopy constructs Project1 with all of the standard Canopy packages installed, and Project1 can now be customized to run the application. Once we’ve got Project1 working with a particular Python configuration, what if we want to see if the application works with the latest version of NumPy? We could update and potentially break the stable environment. Or, we can do this:

./Project1/bin/venv -s ./Project1_play

Now ‘Project1_play’ is a virtual environment that by default has all of Project1’s packages and package versions available. We can now update NumPy or other packages in Project1_play and test the application. If it doesn’t work, no big deal, we just delete it. We now have the ability to rapidly experiment with different (safe) Python environments without breaking our stable working area.

Canopy makes use of virtual environments to provide a protected Python environment for the Canopy GUI application to run in, and to provide one or more User Python environments which you can customize and run your own code in. Canopy Core is the base for each of these virtual environments, providing the core Python components and several common, large packages such as Qt and PySide. This structure means that the Canopy GUI can be updated without impacting your code, and any package updates you install won’t destabilize the Canopy GUI.

Canopy Core can be updated if you want, such as to move to a new version of Python, and each of the virtual environments will be updated automatically as well. This eliminates the need to install a new Python environment and then re-install any third-party packages into that new environment just to update Python.

For more information on how to set up virtual environments with Canopy, check the online docs, or get Canopy v1.1 and try it out.

Our next post will detail how to use Canopy and virtual environments to set up multi-user networks and cluster environments.