Category Archives: SciPy

SciPy 2017 Conference to Showcase Leading Edge Developments in Scientific Computing with Python

Renowned scientists, engineers and researchers from around the world to gather July 10-16, 2017 in Austin, TX to share and collaborate to advance scientific computing tools


AUSTIN, TX – June 6, 2017 –
Enthought, as Institutional Sponsor, today announced the SciPy 2017 Conference will be held July 10-16, 2017 in Austin, Texas. At this 16th annual installment of the conference, scientists, engineers, data scientists and researchers will participate in tutorials, talks and developer sprints designed to foster the continued rapid growth of the scientific Python ecosystem. This year’s attendees hail from over 25 countries and represent academia, government, national research laboratories, and industries such as aerospace, biotechnology, finance, oil and gas and more.

“Since 2001, the SciPy Conference has been a highly anticipated annual event for the scientific and analytic computing community,” states Dr. Eric Jones, CEO at Enthought and SciPy Conference co-founder. “Over the last 16 years we’ve witnessed Python emerge as the de facto open source programming language for science, engineering and analytics with widespread adoption in research and industry. The powerful tools and libraries the SciPy community has developed are used by millions of people to advance scientific inquiry and innovation every day.”

Special topical themes for this year’s conference are “Artificial Intelligence and Machine Learning Applications” and the “Scientific Python (SciPy) Tool Stack.” Keynote speakers include:

  • Kathryn Huff, Assistant Professor in the Department of Nuclear, Plasma, and Radiological Engineering at the University of Illinois at Urbana-Champaign  
  • Sean Gulick, Research Professor at the Institute for Geophysics at the University of Texas at Austin
  • Gaël Varoquaux, faculty researcher in the Neurospin brain research institute at INRIA (French Institute for Research in Computer Science and Automation)

In addition to the special conference themes, there will also be over 100 talk and poster presenters covering eight mini-symposia tracks: Astronomy; Biology, Biophysics, and Biostatistics; Computational Science and Numerical Techniques; Data Science; Earth, Ocean, and Geo Sciences; Materials Science and Engineering; Neuroscience; and Open Data and Reproducibility.

New for 2017 is a sold-out “Teen Track,” a two-day curriculum designed to inspire the scientists of tomorrow.  From July 10-11, high school students will learn more about the Python language and how developers solve real world scientific problems using Python and its scientific libraries.

Conference and tutorial registration is open at https://scipy2017.scipy.org.

About the SciPy Conference

SciPy 2017, the sixteenth annual Scientific Computing with Python conference, will be held July 10-16, 2017 in Austin, Texas. SciPy is a community dedicated to the advancement of scientific computing through open source Python software for mathematics, science and engineering. The annual SciPy Conference allows participants from all types of organizations to showcase their latest projects, learn from skilled users and developers and collaborate on code development. For more information or to register, visit https://scipy2017.scipy.org.

About Enthought

Enthought is a global leader in scientific and analytic software, consulting and training solutions serving a customer base comprised of some of the most respected names in the oil and gas, manufacturing, financial services, aerospace, military, government, biotechnology, consumer products and technology industries. The company was founded in 2001 and is headquartered in Austin, Texas, with additional offices in Cambridge, United Kingdom and Pune, India. For more information visit www.enthought.com and connect with Enthought on Twitter, LinkedIn, Google+, Facebook and YouTube.


Webinar – Python for Professionals: The Complete Guide to Enthought’s Technical Training Courses

View the Python for Professionals Webinar

What: Presentation and Q&A with Dr. Michael Connell, VP, Enthought Training Solutions
Who Should Watch: Anyone who wants to develop proficiency in Python for scientific, engineering, analytic, quantitative, or data science applications, including team leaders considering Python training for a group, learning and development coordinators supporting technical teams, or individuals who want to develop their Python skills for professional applications

View Recording  


Python is a uniquely flexible language: it can be used for everything from software engineering (writing applications) and web app development to system administration and “scientific computing,” which includes scientific analysis, engineering, modeling, data analysis, data science, and the like.

Unlike some “generalist” providers who teach generic Python to the lowest common denominator across all these roles, Enthought specializes in Python training for professionals in scientific and analytic fields. In fact, that’s our DNA, as we are first and foremost scientists, engineers, and data scientists ourselves, who just happen to use Python to drive our daily data wrangling, modeling, machine learning, numerical analysis, simulation, and more.

If you’re a professional using Python, you’ve probably had the thought, “How can I be better, smarter, and faster in using Python to get my work done?” That’s where Enthought comes in: we know that you don’t just want to learn generic Python syntax. Instead, you want to learn the key tools that fit the work you do, you want hard-won expert insights and tips without having to discover them yourself through trial and error, and you want to be able to immediately apply what you learn to your work.

Bottom line: you want results and you want the best value for your invested time and money. These are some of the guiding principles in our approach to training.

In this webinar, we’ll give you the information you need to decide whether Enthought’s Python training is the right solution for you or your team’s unique situation, helping answer questions such as:

  • What kinds of Python training does Enthought offer? Who is it designed for? 
  • Who will benefit most from Enthought’s training (current skill levels, roles, job functions)?
  • What are the key things that make Enthought’s training different from other providers and resources?
  • What are the differences between Enthought’s training courses and who is each one best for?
  • What specific skills will I have after taking an Enthought training course?
  • Will I enjoy the curriculum, the way the information is presented, and the instructor?
  • Why do people choose to train with Enthought? Who has Enthought worked with and what is their feedback?

We’ll also provide a guided tour and insights about our five primary course offerings to help you understand the fit for you or your team:

View Recording  



Presenter: Dr. Michael Connell, VP, Enthought Training Solutions

Ed.D., Education, Harvard University
M.S., Electrical Engineering and Computer Science, MIT


Continue reading

Webinar: Work Better, Smarter, and Faster in Python with Enthought Training on Demand

Join Us For a Webinar

Enthought Training on Demand Webinar

We’ll demonstrate how Enthought Training on Demand can help both new Python users and experienced Python developers be better, smarter, and faster at the scientific and analytic computing tasks that directly impact their daily productivity and drive results.

View a recording of the Work Better, Smarter, and Faster in Python with Enthought Training on Demand webinar here.

What You’ll Learn

Continue reading

Exploring NumPy/SciPy with the “House Location” Problem

Author: Aaron Waters

I created a Notebook that describes how to examine, illustrate, and solve a geometric mathematical problem called “House Location” using Python mathematical and numeric libraries. The discussion uses symbolic computation, visualization, and numerical computations to solve the problem while exercising the NumPy, SymPy, Matplotlib, IPython and SciPy packages.

I hope that this discussion will be accessible to people with a minimal background in programming and a high-school level background in algebra and analytic geometry. There is a brief mention of complex numbers, but the use of complex numbers is not important here except as “values to be ignored”. I also hope that this discussion illustrates how to combine different mathematically oriented Python libraries and explains how to smooth out some of the rough edges between the library interfaces.
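To give a flavor of the kind of hand-off between libraries the notebook walks through, here is a small illustrative sketch (not code from the notebook; the expression here is made up) that defines a function symbolically with SymPy, converts it to a NumPy-ready function with lambdify, and plots it with Matplotlib:

import numpy as np
import sympy as sp
import matplotlib.pyplot as plt

x = sp.symbols('x')
# A made-up sum-of-distances expression, just to show the SymPy -> NumPy hand-off.
expr = sp.sqrt((x - 1)**2 + 4) + sp.sqrt((x + 2)**2 + 1)
f = sp.lambdify(x, expr, 'numpy')   # turn the symbolic expression into a vectorized function

xs = np.linspace(-5, 5, 200)
plt.plot(xs, f(xs))                 # evaluate over an array and plot the result
plt.show()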

http://nbviewer.ipython.org/urls/raw.github.com/awatters/CanopyDemoArchive/master/misc/house_locations.ipynb

Advanced Cython Recorded Webinar: Typed Memoryviews

Author: Kurt Smith

Typed memoryviews are a new Cython feature for accessing memory buffers, such as NumPy arrays, without any Python overhead. This makes them very useful for manipulating blocks of memory in Cython directly without calling into the Python-C API.  Typed memoryviews have a clean declaration syntax and have a NumPy-like look and feel, supporting slicing, striding and indexing.
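For a flavor of the syntax, here is a minimal sketch of my own (not taken from the webinar) that sums a 2-D NumPy array through a C-contiguous typed memoryview. Note that this is Cython, so it lives in a .pyx file and must be compiled before use:

cimport cython

@cython.boundscheck(False)
@cython.wraparound(False)
def total(double[:, ::1] data):   # 'double[:, ::1]' declares a C-contiguous 2-D memoryview
    cdef double s = 0.0
    cdef Py_ssize_t i, j
    for i in range(data.shape[0]):
        for j in range(data.shape[1]):
            s += data[i, j]       # plain indexing, but compiled to direct memory access
    return s

# After compiling, call it from Python as total(numpy.random.rand(1000, 1000)).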

I go into more detail and provide some specific examples on how to use typed memoryviews in this webinar: “Advanced Cython: Using the new Typed Memoryviews”.

If you would like to watch the recorded webinar, you can find a link below (the different formats will play directly in different browsers so check to see which one works for you, and you won’t have to download the whole recording ahead of time):

For all you EPD Users: Canopy v1.1

EPD (Enthought Python Distribution) provided a simple install of Python for scientific computing on the major platforms: Windows, Linux and Mac OS. Those looking for a clean, straightforward Python stack to unpack into a particular directory found EPD to be pretty ideal.

With the introduction of Enthought Canopy, we began addressing users who are more engineer or scientist than programmer and who are much less familiar with command-line interfaces. The Canopy desktop (in the vein of MATLAB or Spyder) aims at these technical users who want to use Python, but more as an application or IDE. To implement the desktop in Python and to allow both it and a user-defined Python environment to co-exist and be separately updated, we used virtual environments. As a consequence, Canopy can feel a bit foreign to EPD users. With 1.1 we have added a new command line interface (CLI) that will hopefully make EPD users feel more at home in Canopy while retaining many of the Canopy advantages such as in-place update and virtual environment support.

Now, EPD users who just want to use Canopy as a plain Python environment with their own tools or IDE can easily create one or more Python environments. For example, from the command line on Windows:

        Canopy_cli.exe setup C:\Python27

or on Linux:

        canopy_cli setup ~/canopy

The target directory can be any you choose. If you want to make this Python environment the default on your system, you can specify the --default switch, and Canopy will add the appropriate bin directory (Scripts directory on Windows) to your PATH environment variable. On Mac OS and Linux systems, Canopy does this by appending a line to your ~/.bash_profile file which activates the correct virtual environment. On Windows, this Python environment is also added to the system registry so third-party tools can correctly find it.
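For example, on Linux the setup command above can be combined with the switch in one step (a sketch; check canopy_cli's built-in help for the exact flag placement on your platform):

        canopy_cli setup --default ~/canopy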

Since we use virtual environments, the installation layout for Canopy is different. With Canopy we install what is referred to as “Canopy Core”: the core Python environment and a minimum set of packages needed to bootstrap Canopy itself. With it we can lock down the Canopy environment, facilitate the automatic update mechanism, and provide reliable startup and fail-safe recovery. For the user, there is a different environment. This means when a Python update comes out, it is no longer necessary to install a whole new environment plus all of your packages and get everything working again. Instead, simply update Canopy and go back to working — all of your packages are still installed but Python has been upgraded.

To complete an install, Canopy creates two virtual environments named ‘System’ and ‘User’. System is where the Canopy GUI runs; no user code runs in this environment. Updates to this virtual environment are done via the Canopy update mechanisms. The User environment is where the kernel and all user code runs. This virtual environment is managed by Package Manager from the desktop or by enpkg from the command line; any packages can be updated and installed without fear of disrupting the GUI. Similarly, updates to the Canopy GUI will not affect packages installed in the User environment and break your code.
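For example, installing or updating a package in the User environment from the command line is a single enpkg call (numpy here is just a stand-in for whichever package you need):

        enpkg numpy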

So why stick with virtual environments for an “EPD-like” install? One of the big challenges with the old, “flat” EPD installation method was updating an install, or trying out different package configurations. With virtual environments, you can create a new environment which inherits packages from another virtual environment, and try out a few package changes. When you are satisfied, it’s straightforward to throw away the experimentation area and make the changes to the original, stable virtual environment.

For more details, check out Creating an EPD-like Python environment in our online docs. And you can download Canopy v1.1 now.

Canopy v1.1 – Linux, Command Line Interface and More


With version 1.1, Enthought Canopy now:

1) addresses, much more completely, the command line use cases that EPD users and IT managers expect from their Python distributions,

2) makes Linux support generally available,

3) streamlines installation for users without internet access with full, single-click installers,

4) supports multiple virtual environments for advanced users via “venv” backported to Python 2.7, and

5) provides updates like numpy 1.7.1, matplotlib 1.3.0 and more.

It’s been just over 4 months since Canopy v1.0 shipped with the new desktop analysis environment and our updated Python distribution for scientific computing. Canopy’s analysis environment seems to be well-received by users looking for a simpler GUI environment, but the Canopy graphical installation process left something to be desired by our EPD users.

Along with the Canopy desktop for users that don’t want to work directly from a command line, Canopy version 1.1 now provides command-line utilities that streamline the installation of a complete Python scientific stack for current EPD users who want to work from the shell or command line. In addition, IT groups or tools specialists that need to manage a central install of Python for a workgroup or department now have the tools they need to install and maintain Canopy. Version 1.1’s command-line installation and setup (and the 1-click, full installers detailed below) are much better for supporting Canopy installations on clusters as well.

Canopy for Linux is now fully released. We have full, tested support for RedHat5, CentOS5, and Ubuntu 12.04. Linux distros and versions beyond those work as well (anecdotally and based on some in-house use), but those are our tested versions.

With Canopy v1.0 we implemented a 2-step installation process. The installer includes the Canopy desktop, the Python packages needed by Canopy itself, and other core scientific Python stack packages for a minimal install (the libraries in Canopy Express). For those with a subscription, the second step requires downloading any additional packages using the Package Manager. This 2-step process is problematic for users that don’t have easy internet access or need to install centrally for a group. To help, we now provide full installers with all the Python packages we support included. This provides a streamlined 1-step install process for those who need it or want it.

To ensure users can install any package updates they wish without messing up package dependencies for Canopy itself, we use virtual environments under the hood. With v1.1 we now provide command-line access to our backport of “venv”. The new CLI provides utilities to create, upgrade, activate and deactivate your own virtual environments. Now it’s much easier to try out new Python environments or set up multiple configurations for a workgroup.

Canopy v1.1 ships many updates to packages and many new ones: OpenCV, LLVM, Bottleneck, gevent, msgpack, py, pytest, six, NLTK, Numba, Mock, patsy and more. You can see the full details on the Canopy Package Index page.

We hope you find version 1.1 useful!

Raspberry Pi Sensor and Actuator Control

Author: Jack Minardi

I gave a talk at SciPy 2013 titled open('/dev/real_world') Raspberry Pi Sensor and Actuator Control. You can find the video on YouTube and the slides on Google Drive; I will summarize the content here.

Typically as a programmer you will work with data on disk, and if you are lucky you will draw pictures on the screen. This is in contrast to physical computing which allows you as a programmer to work with data sensed from the real world and with data sent to control devices that move in the real world.

Mars Rover

physical computing at work. (source)

Goal

Use a Raspberry Pi to read in accelerometer values and to control a servo motor.

Definitions

  • Raspberry Pi
    • Small $35 Linux computer with 2 USB ports, HDMI out, Ethernet, and most importantly…
  • GPIO Pins
    • General Purpose Input/Output Pins
    • This is the component that truly enables “physical computing”. You as a programmer can set the voltage high or low on each pin, which is how you will talk to actuators. You can also read what the voltage currently is on each pin, which is how sensors will talk back to you. It is important to note that each pin represents a binary state: you can only output a 0 or a 1, nothing in between.

In this article I will go over four basic Python projects to demonstrate the hardware capabilities of the Raspberry Pi. Those projects are:

  • Blink an LED.
  • Read a pot (potentiometer).
  • Stream data.
  • Control a servo.

Blink an LED.

An LED is a Light Emitting Diode. A diode is a circuit element that allows current to flow in one direction but not the other. Light emitting means … it emits light. Your typical LED needs current in the range of 10-30 mA and will drop about 2-3 volts. If you connect an LED directly to your Pi’s GPIO it will source much more than 30 mA and will probably fry your LED. To prevent this we have to put a resistor in series. If you want to do the math, you can calculate the appropriate resistance using the following equation:

R = (Vs - Vd) / I
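For example, assuming the Pi’s 3.3 volt supply (Vs), a typical LED drop of about 2 volts (Vd), and a target current of 10 mA, that works out to R = (3.3 - 2.0) / 0.010 = 130 ohms; any larger resistance simply dims the LED further.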

But if you don’t want to do math then pick a resistor between 500-1500 ohms. Once you’ve gathered up all your circuit elements (LED and resistor), build this circuit on a breadboard:

LED Circuit

that’s not so bad, is it?

The code is also pretty simple. But first you will need to install RPi.GPIO. (It might come preinstalled on your OS.)

import time
from itertools import cycle
import RPi.GPIO as io

io.setmode(io.BCM)        # refer to pins by their Broadcom (BCM) numbers
io.setup(12, io.OUT)      # configure pin 12 as an output
o = cycle([1, 0])         # alternate between high and low forever
while True:
    io.output(12, next(o))   # drive the pin high (1) or low (0)
    time.sleep(0.5)

The important lines basically are:

io.setup(12, io.OUT)
io.output(12, 1)

These lines of code set up pin 12 as an output, and then output a 1 (3.3 volts). Run the above code connected to the circuit and you should see your LED blinking on and off every half second.


Read a pot.

A pot is short for potentiometer, which is a variable resistor. This is just a fancy word for knob. Basically, by turning the knob you affect the resistance, which affects the voltage across the pot (V = IR, remember?). Changing voltage relative to some physical value is how many sensors work, and this class of sensor is known as an analog sensor. Remember when I said the GPIO pins can only represent a binary state? We will have to call in the aid of some more silicon to convert that analog voltage value into a binary stream of bits our Pi can handle.

That chunk of silicon is referred to as an Analog-to-Digital Converter (ADC). The one I like is the MCP3008, which has eight 10-bit channels, meaning we can read eight sensor values with a resolution of 1024 (2^10) each. This will map our input voltage of 0 – 3.3 volts to an integer between 0 and 1023.

Pot and ADC Circuit

I’ve turned the Pi into ephemeral yellow labels to simplify the diagram

To talk to the chip we will need a Python package called spidev. For more information about the package and how it works with the MCP3008, check out this great blog post.

With spidev installed and the circuit built, run the following program to read live sensor values and print them to stdout.

import spidev
import time

spi = spidev.SpiDev()
spi.open(0, 0)            # open SPI bus 0, chip-select 0

def readadc(adcnum):
    # Read one of the MCP3008's eight channels (0-7) and return the 10-bit value.
    if not 0 <= adcnum <= 7:
        return -1
    r = spi.xfer2([1, (8+adcnum)<<4, 0])   # start bit, single-ended mode, channel number
    adcout = ((r[1] & 3) << 8) + r[2]      # stitch the 10 result bits back together
    return adcout

while True:
    val = readadc(0)      # channel 0 is where the pot is connected
    print(val)
    time.sleep(0.5)

The most important parts are these two lines:

r = spi.xfer2([1, (8+adcnum)<<4, 0])
adcout = ((r[1] & 3) << 8) + r[2]

They send the read command and extract the relevant returned bits. See the blog post I linked above for more information on what is going on here.
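If you want that raw integer back as a voltage, scaling by the 3.3 V reference undoes the mapping described earlier (adc_to_volts is a small helper I'm adding here for illustration, not part of the original example):

def adc_to_volts(adcout, vref=3.3):
    # Scale the 10-bit reading (0-1023) back to a voltage (0-3.3 V).
    return adcout * vref / 1023.0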


Stream data.

To stream data over the wire we will be using the ØMQ networking library and implementing the REQUEST/REPLY pattern. ØMQ makes it super simple to set up a client and server in Python. The following is a complete working example.

Server

import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)    # REP: the reply side of a request/reply pair
socket.bind('tcp://*:1980')         # listen on port 1980 on all interfaces
while True:
    message = socket.recv()         # wait for a request from the client
    print(message)
    socket.send(b"I'm here")        # send the reply (bytes keep this Python 2/3 friendly)

Client

import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)    # REQ: the request side of a request/reply pair
a = 'tcp://192.168.1.6:1980'        # the Pi's address on the local network
socket.connect(a)
for request in range(10):
    socket.send(b'You home?')       # send a request...
    message = socket.recv()         # ...and block until the reply arrives
    print(message)

Now we can use Traits and Enaml to make a pretty UI on the client side. Check out the acc_plot demo in the GitHub repo to see an example of the Pi streaming data over the wire to be plotted by a client.


Control a servo

Servos are (often small) motors which you can drive to certain positions. For example, for a given servo you may be able to set the drive shaft from 0 to 180 degrees, or anywhere in between. As you can imagine, this could be useful for a lot of tasks, not least of which is robotics.

Shaft rotation is controlled by Pulse Width Modulation (PWM), in which you encode information in the duration of a high-voltage pulse on the GPIO pins. Most hobby servos follow a standard pulse-width convention: a 0.5 ms pulse means go to your minimum position and a 2.5 ms pulse means go to your maximum position. Repeat this pulse every 20 ms and you’re controlling a servo.
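Put another way, a position between 0 and 1 maps linearly onto that 0.5-2.5 ms range. Here is a tiny sketch of the arithmetic (pulse_width_ms is just an illustrative helper, not part of any library):

def pulse_width_ms(position):
    # Map a 0.0-1.0 position onto the 0.5 ms (min) to 2.5 ms (max) pulse described above.
    return 0.5 + 2.0 * position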

PWM Diagram

The pulse width is much more critical than the frequency

These kinds of timings are not possible with Python. In fact, they aren’t really possible with a modern operating system: an interrupt could come in at any time in your control code, causing a longer-than-desired pulse and a jitter in your servo. To meet the timing requirements we have to enter the fun world of kernel modules. ServoBlaster is a kernel module that makes use of the Pi’s DMA control blocks to bypass the CPU entirely. When loaded, the kernel module opens a device file at /dev/servoblaster that you can write position commands to.
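For the curious, a raw position command is just a line written to that device file. The sketch below assumes ServoBlaster's documented "<servo>=<width>" format with widths in 10 microsecond steps, so double-check the units against the version you install:

# Hedged sketch: the command format and units are assumptions based on ServoBlaster's docs.
with open('/dev/servoblaster', 'w') as f:
    f.write('0=150\n')   # servo 0 -> 150 steps of 10 us = a 1.5 ms pulse (roughly centered)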

I’ve written a small object oriented layer around this that makes servo control simpler. You can find my library here:

https://github.com/jminardi/RobotBrain

Simply connect the servo to 5 V and ground on your Pi, and then connect the control wire to pin 4.

Servo Diagram

The python code is quite simple:

import time
import numpy as np
from robot_brain.servo import Servo

servo = Servo(0, min=60, max=200)   # servo 0, bounded by its min and max settings
for val in np.arange(0, 1, 0.05):   # sweep from one end of the range to the other
    servo.set(val)                  # set() takes a float between 0 and 1
    time.sleep(0.1)

All you have to do is instantiate a Servo and call its set() method with a floating point value between 0 and 1. Check out the servo_slider demo on GitHub to see servo control implemented over the network.