Question: I'm using PsychoPy to build a cognitive task. I have 5 circles on the screen and the participant needs to press the correct circle. My code, under Each Frame:

if mouse.isPressedIn(cercle_1):

The problem is that my data file only contains the response time (RT) and the info for cercle_1. I would have no idea whether the participant tried other circles before pressing cercle_1. Question: how can I get, in my CSV file, info about all the times a participant pressed the mouse button? Maybe before pressing cercle_1, they pressed cercle_3.

Answer: It sounds like you want to record a sequence of events in the trial. This can be hard to find an appropriate data structure for, but in your case I can think of two solutions.

One is to add a counter (e.g. n_wrong) and count the number of non-cercle_1 responses. Then in Each Frame add:

if mouse.isPressedIn(cercle_1):
    # correct response, handled as before
elif mouse.isPressedIn(cercle_2) or mouse.isPressedIn(cercle_3) or mouse.isPressedIn(cercle_4) or mouse.isPressedIn(cercle_5):
    n_wrong += 1
    # Now wait until the mouse release to prevent recording 60 wrong clicks per second!

Then under End Routine add:

thisExp.addData('n_wrong', n_wrong)

The other solution is to have a column for each circle and shift those from "unpressed" to "pressed" when they are clicked. So under Begin Routine:

# Mark all non-target circles as unpressed

and then under Each Frame:

if mouse.isPressedIn(cercle_1):

but then you'd also need the while any(mouse.getPressed()): pass trick to record the onset and not just the release. The latter approach could be extended with reaction times by adding columns called cercle1_rt etc. Then the column cercle1 would correspond to what you currently call the "correct" column.

Reply: Thank you for your response Michael, I wanted to get some clarification. Right now, I only have how long it took to get the correct answer. I'll explain my project in greater detail first: I want to show a video clip and then a sound byte. My desired ISI between the video clip and the audio file is .05 ms. So the order of my entire experiment is: Fixation, Text File, Video Clip, .05 ms ISI, Sound Clip.

Answer: Note that delay periods need to be some multiple of the refresh interval of your screen. If your display is running at 60 Hz, then any interval must be a multiple of 16.6667 ms. So 3.0 s is a valid interval, as it equals 180 × 16.6667 ms, but a value like 3.06 s would not be valid (it is 183.6 frames). To add a jittered delay between 2.5 and 4.0 s, you could draw:

jitter = random() * (4.0 - 2.5) + 2.5

and then round the result to a whole number of frames.

On the audio side: the psychopy.sound module provides an interface for audio playback and recording devices, and also tools for working with audio samples and performing speech-to-text transcription. With the PTB backend, PsychPortAudio is a special sound driver for PTB-3; it is a replacement for all other sound drivers and PTB's old SND() function. PsychPortAudio provides the following features: instant start of sound playback with a very low onset latency compared to other sound drivers (on well-working hardware), and start of playback at a scheduled future system time, e.g. scheduling sound onset for a specific time in the future. PsychoPy also provides two virtual voice-keys, one for detecting vocal onsets and one for vocal offsets. Hardware voice-keys are used to detect and signal acoustic properties in real time, e.g. the onset of a spoken word in word-naming studies.
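The n_wrong counting approach described in the answer can be sketched in plain Python, simulating the per-frame click stream instead of using PsychoPy's Mouse object (the frames list and the count_wrong_clicks helper are hypothetical names used only for illustration):

```python
# Sketch of the n_wrong counting logic, simulated frame by frame without
# PsychoPy: each element of `frames` is the circle the mouse is pressed
# in on that frame (None = button not pressed). Only the *onset* of a
# press is counted, mirroring the "wait until the mouse release" trick.

def count_wrong_clicks(frames, target="cercle_1"):
    n_wrong = 0
    button_down = False           # True while the same press is still held
    for circle in frames:
        if circle is None:        # button released
            button_down = False
            continue
        if button_down:           # same press continuing; don't recount
            continue
        button_down = True        # a new press onset
        if circle == target:      # correct circle ends the trial
            break
        n_wrong += 1              # a non-target press

    # In Builder this value would be saved in End Routine via
    # thisExp.addData('n_wrong', n_wrong)
    return n_wrong

# A press held over 3 frames on cercle_3 counts once, not 3 times:
print(count_wrong_clicks(["cercle_3", "cercle_3", "cercle_3",
                          None, "cercle_1"]))  # → 1
```

The release-gating is the key design point: without it, a single click held for half a second would be counted roughly 30 times at 60 Hz.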
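The second approach from the thread, one pressed/unpressed column per circle plus an optional cercle*_rt column, can likewise be sketched without PsychoPy (record_circle_states and the list of (time, circle) press events are hypothetical stand-ins for what the Begin Routine / Each Frame / End Routine code would do):

```python
# Sketch of the per-circle column approach: every circle starts
# "unpressed" (Begin Routine), flips to "pressed" on its first click
# (Each Frame), and a cercle*_rt column stores when that first press
# happened. `events` is a hypothetical list of (time_in_s, circle_name)
# press onsets standing in for real mouse input.

CIRCLES = ("cercle_1", "cercle_2", "cercle_3", "cercle_4", "cercle_5")

def record_circle_states(events, circles=CIRCLES):
    state = {c: "unpressed" for c in circles}     # Begin Routine
    rts = {c + "_rt": None for c in circles}
    for t, circle in events:                      # Each Frame
        if state[circle] == "unpressed":
            state[circle] = "pressed"
            rts[circle + "_rt"] = t               # RT of first press
    # End Routine: each key/value pair would go to thisExp.addData()
    return {**state, **rts}

data = record_circle_states([(0.8, "cercle_3"), (1.4, "cercle_1")])
print(data["cercle_3"], data["cercle_1_rt"])  # → pressed 1.4
```

With this layout, the cercle_1 column carries the same information as the original "correct" column, while the other columns record every detour the participant made.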
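The refresh-rate constraint from the timing answer can be made concrete: draw a jitter with the thread's random() * (4.0 - 2.5) + 2.5 formula, then snap it to a whole number of frames (frame_aligned_jitter is a hypothetical helper, and Python's stdlib random.random() stands in for the random() available in Builder code components):

```python
import random

# Snap a jittered delay to a whole number of frames so it is a valid
# multiple of the screen's refresh interval (16.6667 ms at 60 Hz).
# The draw uses the thread's formula: random() * (4.0 - 2.5) + 2.5.

def frame_aligned_jitter(lo=2.5, hi=4.0, refresh_hz=60):
    raw = random.random() * (hi - lo) + lo   # uniform in [lo, hi)
    frame_s = 1.0 / refresh_hz               # one refresh interval
    n_frames = round(raw / frame_s)          # nearest whole frame
    return n_frames * frame_s

# Any returned value is a whole-number multiple of the frame duration,
# e.g. a raw draw of 3.0 s maps to exactly 180 frames at 60 Hz.
```

Requesting an interval that is not frame-aligned does not raise an error in a real experiment; the screen simply cannot change mid-refresh, so the effective delay silently rounds to a frame boundary. Doing the rounding explicitly keeps the logged ISI honest.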