Wednesday, March 19, 2014

Phototropic automaton

I found the basic idea for this little toy in the now classic book by Hermann Schöne, "Spatial Orientation: The Spatial Control of Behavior in Animals and Man".
The logic is simple: you need a two-wheeled, vehicle-like frame with two small solar panels and two electric motors (one for each wheel), such that the left solar panel connects to the right motor and vice versa. This crossed configuration ensures that when the sun illuminates the left side of the toy, the right wheel spins forward, rotating the toy leftward (toward the sun), and likewise when the sun illuminates the right side. The result is an automaton that tends to move/orient toward the sun.
You can see some videos below:





(notice how it "avoids" the shadows while keeping a more or less stable course)

I guess it might be possible to obtain different phototropic motion patterns by changing the position of the solar panels (thus altering their relation to the direction of the sun).
One interesting aspect of this toy is that it shows how "purposeful"-like behavior might emerge from (very) simple circuits that, nonetheless, establish a link between afferent (sensory) and efferent (motor) components in such a way as to be tuned to a particular environment. Readers of J. J. Gibson might find this toy particularly appealing, as it illustrates how an orientation behavior can emerge from the close coupling between the sensing/moving "being" and ecological variables.
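The crossed wiring can even be sketched in a few lines of code. Below is a minimal simulation (all names and parameter values are mine, for illustration, not measurements from the actual toy) of a two-wheeled vehicle whose left light sensor drives the right motor and vice versa; iterate it and the heading converges toward the light source:

```python
import math

def step(x, y, heading, light_x, light_y, dt=0.1, wheel_base=0.2, gain=1.0):
    """One update of a two-wheeled vehicle with crossed light sensors.

    Sensors sit at the front-left and front-right of the frame; each
    sensor drives the motor on the OPPOSITE side, so the wheel away
    from the light spins faster and the vehicle turns toward it.
    """
    # Sensor positions, offset 45 degrees to each side of the heading
    sensor_angle = math.radians(45)
    lx = x + math.cos(heading + sensor_angle)
    ly = y + math.sin(heading + sensor_angle)
    rx = x + math.cos(heading - sensor_angle)
    ry = y + math.sin(heading - sensor_angle)
    # Illumination falls off with squared distance to the light
    left_sense = 1.0 / ((lx - light_x) ** 2 + (ly - light_y) ** 2 + 1.0)
    right_sense = 1.0 / ((rx - light_x) ** 2 + (ry - light_y) ** 2 + 1.0)
    # Crossed wiring: left sensor -> right motor, right sensor -> left motor
    left_motor = gain * right_sense
    right_motor = gain * left_sense
    # Differential drive kinematics: speed and turning rate
    v = (left_motor + right_motor) / 2.0
    omega = (right_motor - left_motor) / wheel_base
    heading += omega * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    return x, y, heading
```

Starting the simulated vehicle facing away from the light and calling `step` repeatedly turns it steadily toward the light, just like the toy in the videos.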

Sunday, February 9, 2014

Quick tip: How to set the mouse position in PsychoPy

PsychoPy is an open-source application, based on Python, for building psychology experiments. During the last couple of years I have grown increasingly fond of this software: it is magnificently designed, to say the least (kudos to Jonathan Peirce!). One of the main advantages of PsychoPy, from where I stand, is its flexibility: it is very easy to use if you have no programming skills but, if you so wish, it is just as easy to customize the code for your own purposes (I know that some commercially available programs allow the inclusion of scripts, but honestly I reckon that falls short of what you can potentially do with PsychoPy). I might add that much of the knowledge I now have of Python was acquired by carefully studying the code of the first few experiments I built with PsychoPy - eventually, as new needs emerged, I found myself progressively able to comfortably include my own lines of code for the purposes at hand.
I will not comment extensively on PsychoPy here - suffice it to say that I love it! Besides, there are some wonderful reviews of PsychoPy elsewhere (especially here) that are way better than anything I could write.
The purpose of this post is just to share a little trick I recently found - how to set the position of the mouse cursor in PsychoPy. I know that if you tell PsychoPy to use the Pygame modules, setting the mouse cursor position is a no-brainer with the command "setPos". However, if you want to use Pyglet instead (maybe you want to use video clips as stimuli, which is not possible with Pygame), there is no simple solution. To provide a specific scenario: I have been studying representational momentum and representational gravity - briefly, when you present a moving target that suddenly disappears midway through its trajectory to a participant who is instructed to remember and locate that vanishing position, you will find a systematic proneness to indicate a location displaced forward (representational momentum) and downward (representational gravity) relative to the objective disappearance position. A typical trial in an experiment on these phenomena starts by presenting an animation which depicts a moving object (say, a circle or a square); when the animation ends, a cursor, controllable with the mouse, appears on the screen for the participant to provide his/her spatial judgement (where did the target disappear?). Usually, you would want to hide the cursor during the stimulus presentation and, furthermore, you would want it to appear at the center of the screen - otherwise, the cursor would appear at about the location indicated by the participant in the previous trial, introducing a possible confound.
The solution I found goes as follows: first, build your experiment as you normally would in PsychoPy (e.g., in the Builder view) and then compile the script. Second, you have to import "win32api" (you can also use "ctypes", but I will not cover that solution here). Lines 10-16 of your code will probably read something like:

from __future__ import division # so that 1/3=0.333 instead of 1/3=0
from psychopy import visual, core, data, event, logging, sound, gui
from psychopy.constants import * # things like STARTED, FINISHED
import numpy as np # whole numpy lib is available, prepend 'np.'
from numpy import sin, cos, tan, log, log10, pi, average, sqrt, std, deg2rad, rad2deg, linspace, asarray
from numpy.random import random, randint, normal, shuffle
import os # handy system and path functions



Just add the following to line 17:

from win32api import SetCursorPos

Next, you will have to find the place in the script where you want to set the cursor position. Most probably you will want the cursor to appear at a custom position right at the beginning of a mouse "response" event. If that event was named, for instance, "Response", you will find a helpful comment in the code reading something like:

#-------Start Routine "Response"-------

Followed immediately by:

continueRoutine = True

Write the following line just below:

SetCursorPos((X, Y))

where X and Y are the screen coordinates where you want the cursor to appear. Notice that, contrary to the mouse routines used by PsychoPy, the (0, 0) location with win32api is NOT at the center, but at the upper-left corner - that is, if you want the cursor to appear at the center you have to halve the horizontal and vertical resolution of your screen. To provide an example, let's say you have a 1024x768 screen - the center in win32api coordinates would be (512, 384). Your code would thus read:

SetCursorPos((512, 384))

If you want, you can even use the numpy.random functions (loaded by default by PsychoPy) to define a random position for the cursor at each trial. 
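For instance, you could combine SetCursorPos with numpy's randint, and/or wrap the coordinate conversion in a small helper (to_win32 below is my own naming, not part of PsychoPy or win32api); a sketch, assuming a 1024x768 display:

```python
from numpy.random import randint  # loaded by default in PsychoPy scripts

SCREEN_W, SCREEN_H = 1024, 768  # adjust to your display resolution

def to_win32(x_centered, y_centered, w=SCREEN_W, h=SCREEN_H):
    """Convert PsychoPy-style pixel coordinates (origin at the screen
    center, y increasing upward) to win32api coordinates (origin at the
    top-left corner, y increasing downward)."""
    return (w // 2 + x_centered, h // 2 - y_centered)

# A random cursor position for each trial:
rand_pos = (randint(0, SCREEN_W), randint(0, SCREEN_H))
# SetCursorPos(rand_pos)          # uncomment on Windows (needs win32api)
# SetCursorPos(to_win32(0, 0))    # or simply center the cursor
```

The SetCursorPos calls are commented out above only because they require Windows and win32api; in your compiled script you would call them directly.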
And that's it! Reasonably easy, and it works, at least for me. One final word: I have Python installed alongside PsychoPy (standalone version); I have no idea if this would work had I only installed PsychoPy. But, if not, you have a quick fix - just install Python.

Friday, February 7, 2014

Measuring reaction times in fencing II - The Hick-Hyman Law

In the previous post I presented my home-made solution for measuring reaction times in fencing movements - basically, it is a wooden board with three buttons on it, located at the left side, right side and bottom. It connects to a computer via a USB cable. The computer emits a beep to the left, right or both ears through headphones and, upon hearing it, the fencer must strike the corresponding target area as fast as possible.
I will now show some data collected with that device and use it to explain a little bit about one of the most successful laws of psychology - the Hick-Hyman law.
I'll start by explaining in some detail how the data was gathered - it resembled a psychological experiment more than a sports training session. I programmed three different tasks: a simple reaction time task (simple RT) and two- and three-alternative forced-choice tasks (2AFC and 3AFC, respectively). In the simple RT task, there was only one beep and one target area to be hit; in the 2AFC there were two beeps and two target areas to consider; finally, in the 3AFC there were three different beeps signaling that one of the three target areas should be hit. For each task there were 10 repetitions of each condition, and each task was performed twice (in a random order). Now, I am far from being an experienced fencer and, therefore, there was an unsettling number of times that I missed the target altogether - for the present purposes that is fine, but I definitely should practice more often.
As I previously stated, reaction times (RT) are one of the oldest objective measurements used to infer how animals (humans included) perceive the world. In 1952, William Edmund Hick noticed that RTs increased with the number of response alternatives - that is, when a person must press, say, a button in response to a certain stimulus, she/he takes less time than if she/he had to press one of two buttons in response to one of two stimuli; with three buttons mapped to three stimuli, the time it takes to respond is longer still, and so forth. Importantly, the relation between the number of alternatives and the RT was not linear. Ray Hyman, in 1953, made a significant breakthrough by showing that if, instead of considering the number of alternatives, one refers to the amount of information (in bits) involved in the task, the relationship with RTs is linear. Said another way, RTs are linearly related to the amount of information to be processed in a given task. In order to better understand what this means, it is useful to conceive, for a moment, of the human involved in this sort of task as an information channel: something goes in as input (the stimulus), it is processed in a certain fashion, and an output comes out in the form of a response. Imagine that, in the fencing task described above, you were watching me while I performed the task - you could not hear what sound I heard at any moment because I would be the one wearing the headphones; however, by observing my responses you could infer what sound was transmitted by the computer - if I hit the right-side target area, you would assume that I must have heard a sound in my right ear.
In this sense I would be transmitting some information - I would be akin to an information channel; and just like any ordinary communication channel, there might be information lost along the way, noise and so forth (I might mistakenly hit the left target when hearing a sound delivered to my right ear; or I might fail to hit a target, or even get distracted and not make a movement at all). If this sounds too much like engineering, that is normal - that's where most of these ideas were borrowed from anyway.
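The channel metaphor can be made concrete: given a table counting which target was hit for each sound, the information actually transmitted is the mutual information between stimuli and responses. A small sketch (my own code, with made-up counts purely for illustration):

```python
import numpy as np

def transmitted_bits(confusion):
    """Mutual information (in bits) between stimulus and response,
    estimated from a stimulus-by-response count matrix. Perfectly
    consistent responding transmits log2(n) bits for n equiprobable
    stimuli; errors and noise reduce the transmitted amount."""
    p = np.asarray(confusion, dtype=float)
    p /= p.sum()                        # joint probabilities
    ps = p.sum(axis=1, keepdims=True)   # stimulus marginals
    pr = p.sum(axis=0, keepdims=True)   # response marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (ps * pr))
    return np.nansum(terms)             # 0 * log(0) treated as 0

# A fencer who always hits the correct target transmits log2(3) bits
# with three equiprobable signals; mistakes lose some of it:
perfect = [[10, 0, 0], [0, 10, 0], [0, 0, 10]]
noisy   = [[8, 1, 1], [2, 7, 1], [1, 2, 7]]
print(transmitted_bits(perfect))  # ~1.585 bits
print(transmitted_bits(noisy))    # smaller: noise loses information
```

This is exactly the sense in which a distracted or error-prone response pattern "loses" information along the way.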
Now, if we focus on the successful trials (that is, the times when I hit the correct target), the Hick-Hyman law states that RTs should be a linear function of the information present in the task. Without going into too much detail, information can be measured in bits (following Shannon's information theory). The Hick-Hyman law thus formally reads as follows:

RT = a + b * log2(n + 1)

The a and b values are the parameters of the function, with a standing for an estimate of the movement time (the time it takes me to move my arm, discounting any perceptual and processing times). The b parameter, the slope of the line, reflects the processing speed - the velocity at which the observer processes the information; when the observer is conceived as an information channel, the inverse of b is akin to the bandwidth (in bits per second). Finally, the 1 added to n (the number of alternatives) aims to account for the uncertainty in the response.
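Estimating a and b from data amounts to fitting a straight line to RT as a function of bits. A minimal sketch with NumPy (the RT values below are made up for illustration; they are not my actual data):

```python
import numpy as np

# Hick-Hyman law: RT = a + b * log2(n + 1)
# Information per trial for the three tasks (n = 1, 2, 3 alternatives):
bits = np.log2(np.array([1, 2, 3]) + 1)   # -> [1.0, ~1.585, 2.0]

# Hypothetical mean RTs in seconds (illustration only):
rt = np.array([0.62, 0.71, 0.78])

b, a = np.polyfit(bits, rt, 1)            # slope b, intercept a
print(f"a (movement time) = {a:.3f} s")
print(f"b = {b:.3f} s/bit -> processing rate = {1 / b:.2f} bits/s")
```

Note that the slope b comes out in seconds per bit, so the processing rate (bandwidth) is its reciprocal, 1/b, in bits per second.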
The next image plots my average response times against the bits to be processed in the simple RT task (1 bit), the 2AFC task (1.58 bits) and the 3AFC task (2 bits). You will see two lines - the full line is labeled as "lunge" while the dashed line refers to "simple extension".



Let's focus first on the "simple extension" data points. By simple extension I mean a simple movement where, from an appropriate distance, I extend my arm with the epée in order to hit the correct target. In fencing this kind of attack is seldom made on its own, being usually preceded by a parry. I included it here because, due to its simplicity in perceptuo-motor terms, it makes a good baseline condition. You can see in the graph that, in consonance with the Hick-Hyman law, my RTs are fairly linear when plotted against information. An estimate of my movement time (stretching the arm) is about 0.47 seconds, and I processed the information (decided which target I was supposed to hit) at a rate of about 6.43 bits/s (bits per second). This value is well within the ranges usually found with human participants. Now, the full line refers to "lunges" - that is, the basic attack in fencing where, from an appropriate distance (larger than for the simple extension), the fencer pushes his whole body forward with the back leg while thrusting the weapon toward the opponent. In the graph you can see that the lunge times are considerably higher than for the simple extension (nothing strange about that); the movement time is estimated to be about 0.92 seconds. Importantly, you can notice that the line connecting the data points is much flatter for the lunge. This translates into an information transmission rate of about 96.95 bits/s, which is considerably high - so high, in fact, that it raises some suspicion. Actually, there is nothing strange or super-human about this - the interesting and most important point about the Hick-Hyman law is that it assumes that information is processed serially: information goes in, THEN information is processed and THEN information comes out. The 96.95 bits/s refers to the estimated bandwidth IF a serial information channel had produced the results.
The clue to understanding what is going on lies in the overall higher RTs found when performing a lunge. I take more time to lunge than to simply extend my arm; however, while I am lunging, that extra time can be used to decide where to deliver the strike - that is to say, as soon as I hear the sound I probably start the lunge movement immediately and, while doing so, I adjust the orientation of the epée in order to hit the correct target - I probably aim at the target not before starting the movement but while I am performing it. In psychology, we would say that the information is being processed in parallel - I do not need to wait to process the information present in the sound before lunging. This would be evidence that a lunge is not a so-called ballistic movement: it is not as if I "shoot" the epée with no control over its subsequent trajectory; rather, several things happen at once: pushing the body forward with the back leg, deciding where to hit, adjusting the epée's orientation, stretching the arm, moving forward, adjusting the orientation again, and so on.
As a final remark: I am far from being an experienced fencer; also, these data were collected in a fairly relaxed way, just for illustration purposes. I have no idea whether the same conclusions would be reached if more systematic research were done. But, hopefully, if you read this far, you might now understand a little better how we, in the cognitive sciences, can make some inferences about how humans perceive the world around them.

Wednesday, February 5, 2014

Measuring reaction times in fencing - a home made solution

A few months ago I started to practice fencing. As a fast-paced sport requiring quick reactions, it struck me as an obvious setting for measuring reaction times. Gauging reaction times is probably one of the oldest routes to mapping perceptual skills, and it is deeply rooted in the very history of Experimental Psychology.
I thus set out to build some kind of device with which I could assess the speed of my lunges. In most of the experiments I design and conduct, response times are routinely measured, even if the data is not used. In fact, most, if not all, of the software available for building experiments measures response times by default, at least when participants respond with the keyboard or mouse. However, when response times are the main data to be collected, a response box is to be preferred. The reason is that most USB peripheral devices (like keyboards and mice) are not accurate to the millisecond - without being exhaustive, most often the recorded times are biased by tens if not hundreds of milliseconds (see, e.g., here).
Still, I was looking for something affordable and easy to make, at least as some kind of prototype; for my purposes, it is OK if my time readings are not pristine. I thus decided to use an old (and cheap) USB mouse that was lying around. My idea was to use the circuit board of the mouse and to solder to it some kind of buttons big enough to be pressed with an epée (my weapon of choice). I bought some cheap light switches which I could turn into press buttons by adding a strong enough spring (looking back, it might not have been the best solution, but it works fine). Here you can see all those elements ready to be glued to a wooden board:


And here with some padding (to increase the effective area in which the buttons could be pressed, squares of hard paperboard were glued to the light switches):


Finally, the entire contraption was wrapped in fabric and the target areas painted. Below you can see a short clip showing the final apparatus connected to a laptop which, in turn, feeds its screen to a large TV (just so that it can be seen in the video). A Python program (made with PsychoPy) is set to produce a "beep" sound to the left, right or both ears at unexpected intervals, signaling that a lunge should be made to the left, right or bottom target area, respectively. The time elapsed between the sound and a successful strike is then shown on the screen in milliseconds.
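To give an idea of the left/right/both signalling scheme, here is a rough sketch (my own code, with assumed parameter values; the actual program uses PsychoPy's sound module) that builds the stereo sample arrays for each kind of beep with NumPy:

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz (assumed)

def beep(channel, freq=880.0, dur=0.15, rate=SAMPLE_RATE):
    """Return a stereo sample array (one column per ear) for a short
    beep delivered to the 'left', 'right', or 'both' headphone channels."""
    n = int(round(rate * dur))
    t = np.linspace(0.0, dur, n, endpoint=False)
    tone = 0.8 * np.sin(2.0 * np.pi * freq * t)
    silence = np.zeros_like(tone)
    left = tone if channel in ("left", "both") else silence
    right = tone if channel in ("right", "both") else silence
    return np.column_stack([left, right])

left_beep = beep("left")    # signals: lunge to the left target
right_beep = beep("right")  # signals: lunge to the right target
both_beep = beep("both")    # signals: lunge to the bottom target
```

In the real program these buffers would be handed to the audio library and the clock started at sound onset.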


There are several things that could be improved: for one, the bottom target area could be in a better position (it was a late addition - I figured that as the mouse I used had three buttons, I might as well have a third target area). But, overall, I am happy with the end result - it is cheap and good enough. In a future post I will show some data.

Tuesday, February 4, 2014

Kymograph, reloaded

It has been a while since I even thought about writing a blog; in fact, this is the second version of the "Kymograph" blog. The first one was a small project I devoted some time to during my PhD - a Portuguese-written blog which, basically, was little more than a repository for some theoretical notes on psychophysical methods. Looking back, I have to recognize that at the time I made the number one blogging mistake: starting a blog without having anything worth saying. I cannot guarantee that this time will be different, but at least I now have some clearer ideas about what this blog should be about.
So, as is customary in a first post, I will start by presenting myself - I am currently a post-doc researcher at the University of Coimbra, with a project on how (the sensation of) gravity shapes and constrains the visual perception of space. Much of what I will write about in this blog stems from challenges and issues that I came across during this scientific endeavor. I can say that the last couple of years have been, for me as a researcher, a time of increased creativity and scientific flourishing - hopefully, that will provide me with enough opportunities to share interesting stuff here and, above all (fingers crossed), something worth sharing! :)