OK, this is a work in progress, but I wanted to take some notes on a little project. I love the Internet Archive (https://archive.org/), especially now that I have a Kindle and a Kindle Fire. I also really like their online BookReader, a visually pleasing JavaScript viewer for the original images of scanned book pages. It is especially nice for books in which layout, typesetting, images, etc. are important (these are generally lost or mangled in OCR-based reflowable text files for eReaders). I recently read The Perception of the Visual World by James Gibson entirely online using it!
Great, but what if I want to be able to read when I am not connected to the Internet? (I don't know why I am so worried about having local copies of some things; call it a digital hoarding tendency.) It turns out that the BookReader is an open source project, entirely downloadable from GitHub! Some documentation is here.
Easy, I thought. I downloaded the zip file, unzipped it, and was off and running with the BookReaderDemo (double-click index.html and it should open in your browser). It took a few small tricks to get it to work with a new book downloaded from the archive. I decided to start with On Growth and Form by D'Arcy Wentworth Thompson. I downloaded all of the files (under All files: Torrent). I unzipped the "ongrowthform1917thom_flippy.zip" folder, which contains a bunch of .jpg files of the pages, and copied them all into a directory called "jpgImages" inside "BookReaderDemo". Then I changed BookReaderJSSimple.js in the following ways:
line 11 to return 365;
line 16 to return 600;
line 80 to br.numLeafs = 708;
(These just change the width, height, and number of pages; I am not entirely sure the values are correct.)
Finally the important part, telling it where to find the new images:
line 28 to var url = 'jpgImages/'+leafStr.replace(re, imgStr) + '.jpg';
This worked, but the pages all looked low-quality and blurry. I poked around, and it looks like the Internet Archive actually uses JPEG 2000 (.jp2) for the real full-quality images. These are stored in the original download in the ongrowthform1917thom_jp2.zip file. I unzipped this, but just opening the new directory crashed nautilus! I guess jp2 hasn't exactly been widely adopted... I downloaded ImageMagick by running sudo apt-get install imagemagick
Now I could navigate to the directory containing the .jp2 files and convert them to .jpg with the command mogrify -format jpg *.jp2
Now if I change line 28 to var url = 'jp2Images/ongrowthform1917thom_'+leafStr.replace(re, imgStr) + '.jpg';
(after copying the jp2 directory to BookReaderDemo and renaming it jp2Images) it looks like I get a functioning bookreader! I'm sure their implementation with jp2 files is faster and has other advantages, but I'm just happy to get something working relatively quickly.
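For reference, that url line is just mapping a page (leaf) index to a zero-padded filename. A rough sketch of the same mapping in Python (the four-digit pattern and the prefix are my assumptions; check the actual filenames in your unzipped folder):

```python
def page_url(leaf_index, prefix='jpgImages/', pad=4):
    """Map a page (leaf) index to an image URL, e.g. 7 -> 'jpgImages/0007.jpg'.

    The zero-padded four-digit filename pattern is an assumption; check
    the filenames the archive download actually contains.
    """
    return '%s%0*d.jpg' % (prefix, pad, leaf_index)
```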
Finally, let's see how well it works to embed the book (still hosted on archive.org) here:
A digital lab notebook. Topics range from scientific computing using Python to neurobiology of Drosophila.
Thursday, October 31, 2013
Tuesday, October 29, 2013
Little change to Panel_com
I just realized that I get annoyed every time I make a pattern and have to watch the x and y indices scroll past as Make_pattern_vector.m loops. So I changed lines ~58-62 from
for index_x = 1:NumPatsX
 for index_y = 1:NumPatsY
  [ index_x index_y ]
to
['x progress ',char(61)*ones(1,40)]
for index_x = 1:NumPatsX
 [char(8)*ones(1,10+51),'x progress ',char(62)*ones(1,round(index_x*40/NumPatsX)),char(61)*ones(1,round(40-index_x*40/NumPatsX))]
  for index_y = 1:NumPatsY
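As an aside, the same overwrite-in-place trick translates naturally to Python; here is a sketch (not part of Panel_com) where '\r' plays the role of the char(8) backspaces, and '=' (char(61)) and '>' (char(62)) draw the bar:

```python
import sys

def progress_bar(i, n, width=40):
    """Return a bar like 'x progress >>>>====' for step i of n."""
    filled = int(round(i * width / float(n)))
    return 'x progress ' + '>' * filled + '=' * (width - filled)

def show_progress(i, n):
    # '\r' moves the cursor back to the start of the line, so each call
    # overwrites the previous bar instead of scrolling past.
    sys.stdout.write('\r' + progress_bar(i, n))
    sys.stdout.flush()
```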
Saturday, October 12, 2013
My new paper is out!
I just checked, and it looks like my paper on central complex visual responses during flight has been out since Wednesday! Here is the link to it on the Journal of Neurophysiology website:
http://jn.physiology.org/content/early/2013/10/04/jn.00593.2013.long
and here is the link to it on pubmed:
http://www.ncbi.nlm.nih.gov/pubmed/24108792
In it we report responses of the ExFl1 (aka F1) neurons in the ventral-most layer of the fan-shaped body. They respond with increased activity to front-to-back motion of stripes or other types of progressive optic flow while the animal is flying. While the animal is quiescent, however, the cells are unresponsive to identical stimuli. We used both two-photon calcium imaging and patch-clamp electrophysiology to measure the neuronal responses. Hope you enjoy it! Below are some additional images related to the work:
This is a schematic based on a single dye-filled neuron showing the position of ExFl1 neurons with respect to other central complex neuropils.
This is a view of the CAD model of the 2-photon rig that I used to conduct the imaging experiments. The camera (top right) views a mirror that reflects an image of the fly, so that I can monitor its wing stroke amplitudes. The objective views the fly head from the posterior direction. The blue LED array allows me to show arbitrary visual stimuli to the fly. I can also deliver gentle air puffs via the tube below the fly to initiate flight.
I will post some videos on the lab website and vimeo.
Thursday, September 26, 2013
Testing image registration to correct for brain motion
Today I am going to test out a method for correcting for image motion, based on "Efficient subpixel image registration algorithms," Opt. Lett. 33, 156-158 (2008). There is a MATLAB implementation that seems to be widely used. This has been translated into Python here:
http://image-registration.readthedocs.org/en/latest/
github page here:
https://github.com/keflavich/image_registration
From this page I cloned the repository (actually, I just downloaded the zipped directory into my src directory, then installed it using the following commands).
cd image_registration/
sudo python setup.py install
Attempting to run it informed me that I was missing some dependencies, which I downloaded using the following commands (additional information on these is located here http://pytest.org/latest/contents.html and here http://www.stsci.edu/institute/software_hardware/pyfits and here http://stsdas.stsci.edu/astrolib/pywcs/).
pip install -U pytest
sudo pip install pyfits
sudo pip install pywcs
Now, I am up and running! I tested image_registration.chi2_shift by modifying one of the examples. Since I think it will be helpful to "stabilize" an image, I want to not only find out the amount one image is shifted relative to another, but also be able to shift it back into register. This test snippet allowed me to test this:
import image_registration
import numpy as np
import matplotlib.pyplot as plt
plt.ion()

# Build a 100x100 test image: a Gaussian blob centered at (50, 50)
rr = ((np.indices([100,100]) - np.array([50.,50.])[:,None,None])**2).sum(axis=0)**0.5
exampleImage = np.exp(-rr**2/(3.**2*2.)) * 20

fig = plt.figure()
ax1 = fig.add_subplot(221)
ax1.imshow(exampleImage)
ax1.set_title('Original image')

# Shift the image by (12, 5) pixels and add Gaussian noise
ax2 = fig.add_subplot(222)
shiftedImage = np.roll(np.roll(exampleImage,12,0),5,1) + np.random.randn(100,100)
ax2.imshow(shiftedImage)
ax2.set_title('Noisy shifted image')

# Estimate the shift (and its uncertainty), then undo it
dx,dy,edx,edy = image_registration.chi2_shift(exampleImage, shiftedImage, upsample_factor='auto')
shiftedBackImage = image_registration.fft_tools.shift2d(shiftedImage,-dx,-dy)
ax3 = fig.add_subplot(224)
ax3.imshow(shiftedBackImage)
ax3.set_title('Noisy image shifted to match original')
Looks good!
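For intuition about what chi2_shift is doing under the hood, the core idea (find the displacement via the Fourier shift theorem) can be sketched in plain NumPy with phase correlation. This is my simplified, integer-pixel-only sketch; the actual library adds subpixel upsampling and error estimates:

```python
import numpy as np

def integer_shift(ref, img):
    """Estimate the integer (row, col) shift of img relative to ref.

    Phase correlation: the normalized cross-power spectrum of two
    shifted copies of an image is a pure phase ramp, whose inverse
    FFT is a delta function at the displacement.
    """
    cross = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the halfway point correspond to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

With img = np.roll(np.roll(ref, 12, 0), 5, 1) this recovers (12, 5), matching the shift applied in the snippet above.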
Monday, June 24, 2013
Installing Ubuntu 12.04 on Lenovo ThinkStation E30
OK, this was a little frustrating. I downloaded the .iso and used the Startup Disk Creator to make a USB flash drive from which to install Ubuntu 12.04 Precise. Trying it out from the USB would work, and the installation would appear to work, but upon restarting I would get Error 1962: no operating system found (this after some odd messages about trying to connect to a MAC address...). Googling around, I found other people have come across similar problems: http://askubuntu.com/questions/141879/error-1962-no-operating-system-found-after-installing-12-04-lenovo-thinkcentre
(Just FYI if anyone looks at this, messing with the BIOS: devices, startup options, etc. did absolutely nothing to fix the problem. It apparently has to do with the partitioning.)
Following their lead, I downloaded an older version of Ubuntu (10.04.4) from here: http://old-releases.ubuntu.com/releases/10.04.0/ choosing ubuntu-10.04.4-desktop-amd64.iso because I want to use the 64-bit version. I made a startup disk using this file, installed it on my hard drive, and restarted.
I'm not sure, but at this point I think I needed to remove my NVIDIA video card and just use the VGA connector; otherwise it worked fine. Then I ran Update Manager, waited for all updates to complete, and clicked the option to upgrade to 12.04. This took a long time, but in the end it worked. Restarting worked, and then replacing my video card and installing its drivers (using the 'Additional Drivers' GUI) got everything up and running! I would recommend this route. It takes time, but not much actual work. Just have a book handy while waiting for things to download and install.
Tuesday, March 19, 2013
Snapshot tool using Adobe Acrobat on a secured file
When attempting to use the snapshot tool (camera icon) to select parts of a .pdf for a journal club paper today, I ran into an error. The highlight square would appear, but the area would not get copied to the clipboard, so I could not paste it into my presentation. I had downloaded the paper from the journal's website, and it had some security enabled that resulted in "(SECURED)" appearing in the title bar, next to the .pdf's name:
The problem occurred in both Adobe Acrobat and Reader. To fix it, I clicked on the lock icon in the left menu (Security Settings) and then clicked "Permission Details" (see above screenshot).
This opened the "Document Properties" dialog box to the "Security" tab. Here I chose "No Security" from the "Security Method" drop-down menu (see screenshot below), clicked OK, and voila--the snapshot tool worked!
Monday, March 11, 2013
Smooth lines become jagged when saved from matplotlib
I just learned about a very important parameter when saving .pdf or .svg files of figures with many small axes:
import matplotlib.pyplot as plt
plt.rc('path',simplify=False)
or...
plt.rc('path',simplify_threshold=.0001)
Matplotlib (pyplot) tries to intelligently downsample the points that make up lines in order to save space, but when you have very small subplots this results in jagged lines. By either turning off this path.simplify feature, or decreasing the threshold, those lines will be smoother.
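A minimal, self-contained sketch of the fix in context (the 4x4 grid of sine waves is just a stand-in for a real figure with many small axes):

```python
import matplotlib
matplotlib.use('Agg')                 # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

plt.rc('path', simplify=False)        # keep every point when writing vector output

x = np.linspace(0, 10, 5000)
fig, axes = plt.subplots(4, 4, figsize=(6, 6))
for ax in axes.flat:
    ax.plot(x, np.sin(x))
fig.savefig('many_small_axes.pdf')    # lines stay smooth in the .pdf
```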
Tuesday, February 12, 2013
Make any matplotlib figure look like an XKCD comic!
This cool Python script came in very handy while preparing slides for my last group meeting, thanks, Jake Vanderplas!
http://jakevdp.github.com/blog/2012/10/07/xkcd-style-plots-in-matplotlib/
And thanks, of course, to my favorite web comic, xkcd.com
Thursday, January 10, 2013
Setting up Measurement Computing USB-1208FS in MATLAB 2012b
Along with the usual InstaCal installation required to get the USB-1208FS recognized by the computer, run MATLAB as Administrator (right-click on the MATLAB icon and choose 'Run as administrator'), then run the command
daqregister('mcc')
at the MATLAB command prompt.
(This is described in the MATLAB solution 1-5SAEJ8.)