As a reminder, one of the prerequisites for this course is programming experience, especially in Python. If you do not have experience in Python specifically, we strongly recommend you go through the Codecademy Python course as soon as possible to brush up on the basics of Python.
Before going through this notebook, you may want to take a quick look at [7 - Debugging.ipynb](7 - Debugging.ipynb) if you haven't already for some tips on debugging your code when you get stuck.

Sometimes, there are more advanced operations we want to do with NumPy arrays. For example, if we had an array of values and wanted to set all negative values to zero, how would we do this? The answer is called fancy indexing, and it can be done in two ways: boolean indexing and array indexing.

import numpy as np

Boolean indexing

The idea behind boolean indexing is that for each element of the array, we know whether we want to select it or not. A boolean array is an array of the same shape as our original array which contains only True and False values. The locations of the True values in our boolean array indicate which elements of the original array we want to select, while the locations of the False values correspond to the elements we don't want to select.
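To make this concrete before we turn to the real data, here is a minimal sketch on a small made-up array (the values are just for illustration):

# a small illustrative array, not part of the experiment data
arr = np.array([3, -1, 4, -2, 5])
mask = arr > 0     # boolean array: array([ True, False,  True, False,  True])
arr[mask]          # selects only the elements where mask is True: array([3, 4, 5])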

Let's consider our experiment data again:

data = np.load("data/experiment_data.npy")
data

Recall that these are reaction times. It is typically accepted that really low reaction times -- such as less than 100 milliseconds -- are too fast for people to have actually seen and processed the stimulus. Let's see if there are any reaction times less than 100 milliseconds in our data.

To pull out just the elements less than 100 milliseconds, we need two steps. First, we use boolean comparisons to check which are less than 100ms:

too_fast = data < 100
too_fast

Then, using this too_fast array, we can index back into the original array, and see that there are indeed some trials which were abnormally fast:

data[too_fast]

What this is doing is essentially saying: for every element in too_fast that is True, give me the corresponding element in data.
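If it helps to see what this indexing is doing under the hood, here is a rough (and much slower) pure-Python equivalent, flattening both arrays just for illustration:

# roughly what data[too_fast] computes, element by element
selected = np.array([x for x, fast in zip(data.ravel(), too_fast.ravel()) if fast])
selected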

Because this is a boolean array, we can also negate it, and pull out all the elements that we consider to be valid reaction times:

data[~too_fast]

Not only can we pull elements out this way, we can also assign to them: using a boolean array as an index on the left-hand side of an assignment modifies the original array in place. In this case, we will set our "too fast" elements to have a value of "not a number", or NaN:

data[too_fast] = np.nan
data

Now, if we try to find which elements are less than 100 milliseconds, we will not find any:

data[data < 100]
Note: You may see a RuntimeWarning when you run the above cell, saying that an "invalid value" was encountered. Sometimes, it is possible for NaNs to appear in an array without your knowledge: for example, if you multiply infinity (np.inf) by zero. So, NumPy is warning us that it has encountered NaNs (the "invalid value") in case we weren't aware. We knew there were NaNs because we put them there, so in this scenario we can safely ignore the warning. However, if you encounter a warning like this in the future and you weren't expecting it, make sure you investigate the source of the warning!
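As an aside: once NaNs are in an array, comparisons against them are always False, so you cannot find them again with == or <. If you ever need to locate or work around them, np.isnan and the NaN-aware aggregation functions are the usual tools (a quick sketch, not needed for the exercises below):

np.isnan(data)          # boolean array: True wherever an element is NaN
data[np.isnan(data)]    # pulls out just the NaN entries
np.nanmean(data)        # like np.mean, but ignores NaNs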

Exercise: Threshold (2 points)

Write a function, threshold, which takes an array and returns a new array with values thresholded by the mean of the array.
def threshold(arr):
    """Computes the mean of the given array, and returns a new array which
    is 1 where values in the original array are greater than the mean, 0
    where they are equal to the mean, and -1 where they are less than the
    mean.

    Remember that if you want to create a copy of an array, you need to use
    `arr.copy()`.

    Hint: your solution should use boolean indexing, and can be done in six
    lines of code (including the return statement).

    Parameters
    ----------
    arr : numpy.ndarray

    Returns
    -------
    new_arr : thresholded version of `arr`

    """
    ### BEGIN SOLUTION
    array_mean = np.mean(arr)
    new_arr = arr.copy()
    new_arr[arr > array_mean] = 1
    new_arr[arr == array_mean] = 0
    new_arr[arr < array_mean] = -1
    return new_arr
    ### END SOLUTION
# add your own test cases in this cell!
"""Try a few obvious threshold cases.""" from numpy.testing import assert_array_equal assert_array_equal(threshold(np.array([1, 1, 1, 1])), np.array([0, 0, 0, 0])) assert_array_equal(threshold(np.array([1, 0, 1, 0])), np.array([1, -1, 1, -1])) assert_array_equal(threshold(np.array([1, 0.5, 0, 0.5])), np.array([1, 0, -1, 0])) assert_array_equal( threshold(np.array([[0.5, 0.2, -0.3, 0.1], [1.7, -3.8, 0.5, 0.6]])), np.array([[1, 1, -1, 1], [1, -1, 1, 1]])) print("Success!")
"""Make sure a copy of the array is being returned, and that the original array is unmodified.""" x = np.array([[0.5, 0.2, -0.3, 0.1], [1.7, -3.8, 0.5, 0.6]]) y = threshold(x) assert_array_equal(x, np.array([[0.5, 0.2, -0.3, 0.1], [1.7, -3.8, 0.5, 0.6]])) assert_array_equal(y, np.array([[1, 1, -1, 1], [1, -1, 1, 1]])) print("Success!")

Array indexing

The other type of fancy indexing is array indexing. Let's consider our average response across participants:

data = np.load("data/experiment_data.npy")
avg_responses = np.mean(data, axis=1)
avg_responses

And let's say we also know which element corresponds to which participant, through the following participants array:

participants = np.load("data/experiment_participants.npy")
participants

In other words, the first element of avg_responses corresponds to the first element of participants (so participant 45), the second element of avg_responses was given by participant 39, and so on.

Let's say we wanted to know what participants had the largest average response, and what participants had the smallest average response. To do this, we might try sorting the responses:

np.sort(avg_responses)

However, we then don't know which responses correspond to which trials. A different way to do this would be to use np.argsort, which returns an array of indices corresponding to the sorted order of the elements, rather than the elements in sorted order:

np.argsort(avg_responses)

What this says is that element 18 is the smallest response, element 42 is the next smallest response, and so on, all the way to element 24, which is the largest response:

avg_responses[18]
avg_responses[42]
avg_responses[24]
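To make the difference between np.sort and np.argsort concrete, here is the same idea on a tiny made-up array (just for illustration):

x = np.array([30, 10, 20])
np.sort(x)       # array([10, 20, 30]) -- the values themselves, in sorted order
np.argsort(x)    # array([1, 2, 0])    -- the indices that would sort x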

To use fancy indexing, we can actually use this array of integers as an index. If we use it on the original array, then we will obtain the sorted elements:

avg_responses[np.argsort(avg_responses)]

And if we use it on our array of participants, then we can determine what participants had the largest and smallest responses:

participants[np.argsort(avg_responses)]

So, in this case, participant 10 had the smallest average response, while participant 47 had the largest average response.
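If all you need are the extremes rather than the full ordering, np.argmin and np.argmax give the index of the smallest and largest element directly; here is a small sketch using the arrays above:

participants[np.argmin(avg_responses)]   # participant with the smallest average response
participants[np.argmax(avg_responses)]   # participant with the largest average response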

From boolean to integer indices

Sometimes, we want to use a combination of boolean and array indexing. For example, if we wanted to pull out just the responses for participant 2, a natural approach would be to use boolean indexing:

participant_2_responses = data[participants == 'p_002']

Another way that we could do this would be to determine the index of participant 2, and then use that to index into data. To do this, we can use a function called np.argwhere, which returns the indices of elements that are true:

np.argwhere(participants == 'p_002')

So in this case, we see that participant 2 corresponds to index 26.
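Once we have that index, using it is just ordinary integer indexing. As a quick sketch (assuming, as above, that exactly one entry matches), the following picks out the same row as the boolean version:

idx = np.argwhere(participants == 'p_002')   # array of matching indices, shape (1, 1) here
data[idx[0, 0]]                              # same values as data[participants == 'p_002'][0]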

Exercise: Averaging responses (2 points)

Write a function that takes as arguments a participant id, the data, and the list of participant names, and computes the average response for the given participant.
Occasionally we will ask you to raise an error if your function gets inputs that it's not expecting. As a reminder, to raise an error, you should use the raise keyword. For example, to raise a ValueError, you would do raise ValueError(message), where message is a string explaining specifically what the error was.
def participant_mean(participant, data, participants):
    """Computes the mean response for the given participant. A ValueError
    should be raised if more than one participant has the given name.

    Hint: your solution should use `np.argwhere`, and can be done in four
    lines (including the return statement).

    Parameters
    ----------
    participant : string
        The name/id of the participant
    data : numpy.ndarray with shape (n, m)
        Rows correspond to participants, columns to trials
    participants : numpy.ndarray with shape (n,)
        A string array containing participant names/ids, corresponding to
        the rows of the `data` array.

    Returns
    -------
    float : the mean response of the participant over all trials

    """
    ### BEGIN SOLUTION
    i = np.argwhere(participants == participant)
    if i.size > 1:
        raise ValueError("more than one participant with id: " + participant)
    return np.mean(data[i])
    ### END SOLUTION
# add your own test cases in this cell!
"""Check for correct answers with the example experiment data.""" from numpy.testing import assert_allclose data = np.load("data/experiment_data.npy") participants = np.load("data/experiment_participants.npy") assert_allclose(participant_mean('p_002', data, participants), 1857.7013113499095) assert_allclose(participant_mean('p_047', data, participants), 1906.0651466520821) assert_allclose(participant_mean('p_013', data, participants), 1718.4379910225193) print("Success!")
"""Check for correct answers for some different data.""" data = np.arange(32).reshape((4, 8)) participants = np.array(['a', 'b', 'c', 'd']) assert_allclose(participant_mean('a', data, participants), 3.5) assert_allclose(participant_mean('b', data, participants), 11.5) assert_allclose(participant_mean('c', data, participants), 19.5) assert_allclose(participant_mean('d', data, participants), 27.5) print("Success!")
"""Check that a ValueError is raised when the participant name is not unique.""" from nose.tools import assert_raises data = np.arange(32).reshape((4, 8)) participants = np.array(['a', 'b', 'c', 'a']) assert_raises(ValueError, participant_mean, 'a', data, participants) print("Success!")