diff --git "a/pmpp_qa.jsonl" "b/pmpp_qa.jsonl" new file mode 100644--- /dev/null +++ "b/pmpp_qa.jsonl" @@ -0,0 +1,146 @@ +{"chapter": 2, "exercise": "1", "type": "mcq", "question": "If we want to use each thread in a grid to calculate one output element of a vector addition, what is the expression for mapping the thread/block indices to the data index i?", "choices": ["A. i = threadIdx.x + threadIdx.y;", "B. i = blockIdx.x + threadIdx.x;", "C. i = blockIdx.x * blockDim.x + threadIdx.x;", "D. i = blockIdx.x * threadIdx.x;"], "answer": "C", "explanation": "You need both the block offset (blockIdx.x * blockDim.x) and the thread offset within the block (threadIdx.x).", "topic_tags": ["CUDA", "indexing", "grid", "blockDim"]} +{"chapter": 2, "exercise": "2", "type": "mcq", "question": "Each thread calculates two adjacent elements of a vector addition. What is the expression for the data index i of the first element processed by a thread?", "choices": ["A. i = blockIdx.x * blockDim.x + threadIdx.x * 2;", "B. i = blockIdx.x * threadIdx.x * 2;", "C. i = (blockIdx.x * blockDim.x + threadIdx.x) * 2;", "D. i = blockIdx.x * blockDim.x * 2 + threadIdx.x;"], "answer": "C", "explanation": "This doubles the logical thread index so each thread starts at an even index (0,2,4,...) while remaining contiguous across blocks.", "topic_tags": ["CUDA", "indexing", "coarsening"]} +{"chapter": 2, "exercise": "3", "type": "mcq", "question": "Each thread calculates two elements. A block processes 2*blockDim.x consecutive elements in two sections: first section (each thread does one element), then second section (each thread does one element). What is the expression for the first element index i for a thread?", "choices": ["A. i = blockIdx.x * blockDim.x + threadIdx.x + 2;", "B. i = blockIdx.x * threadIdx.x * 2;", "C. i = (blockIdx.x * blockDim.x + threadIdx.x) * 2;", "D. i = blockIdx.x * blockDim.x * 2 + threadIdx.x;"], "answer": "D", "explanation": "The first section starts at the block's base offset of 2*blockDim.x. Each thread handles i and then i + blockDim.x in the second section.", "topic_tags": ["CUDA", "indexing", "grid"]} +{"chapter": 2, "exercise": "4", "type": "mcq", "question": "Vector addition with length 8000, 1 output element per thread, block size 1024. Using the minimum number of blocks to cover all elements, how many threads are in the grid?", "choices": ["A. 8000", "B. 8196", "C. 8192", "D. 8200"], "answer": "C", "explanation": "ceil(8000/1024) = 8 blocks, each with 1024 threads -> 8*1024 = 8192 threads.", "topic_tags": ["CUDA", "launch_config"]} +{"chapter": 2, "exercise": "5", "type": "mcq", "question": "Allocate an array of v integers in device global memory with cudaMalloc. What is the correct expression for the second argument (size in bytes)?", "choices": ["A. n", "B. v", "C. n * sizeof(int)", "D. v * sizeof(int)"], "answer": "D", "explanation": "cudaMalloc takes the size in bytes; for v integers that is v * sizeof(int).", "topic_tags": ["CUDA", "cudaMalloc", "API"]} +{"chapter": 2, "exercise": "6", "type": "mcq", "question": "Allocate an array of n floats and have pointer A_d point to it. What is the appropriate first argument to cudaMalloc?", "choices": ["A. n", "B. (void*) A_d", "C. *A_d", "D. 
(void**) &A_d"], "answer": "D", "explanation": "cudaMalloc's first parameter is a void** to receive the device pointer (i.e., the address of the pointer).", "topic_tags": ["CUDA", "cudaMalloc", "API"]} +{"chapter": 2, "exercise": "7", "type": "mcq", "question": "Copy 3000 bytes from host array A_h to device array A_d. Which API call is correct?", "choices": ["A. cudaMemcpy(3000, A_h, A_d, cudaMemcpyHostToDevice);", "B. cudaMemcpy(A_h, A_d, 3000, cudaMemcpyDeviceToHost);", "C. cudaMemcpy(A_d, A_h, 3000, cudaMemcpyHostToDevice);", "D. cudaMemcpy(3000, A_d, A_h, cudaMemcpyHostToDevice);"], "answer": "C", "explanation": "Syntax is cudaMemcpy(dst, src, sizeBytes, kind). Here we copy from host to device.", "topic_tags": ["CUDA", "cudaMemcpy", "API"]} +{"chapter": 2, "exercise": "8", "type": "mcq", "question": "How to declare variable err to receive return values of CUDA API calls?", "choices": ["A. int err;", "B. cudaError err;", "C. cudaError_t err;", "D. cudaSuccess_t err;"], "answer": "C", "explanation": "CUDA API error return type is cudaError_t.", "topic_tags": ["CUDA", "error_handling", "API"]} +{"chapter": 2, "exercise": "9a", "type": "short_answer", "question": "Given the CUDA code:\n\n```c\n__global__ void foo_kernel(float* a, float* b, unsigned int N) {\n unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;\n if (i < N) {\n b[i] = 2.7f * a[i] - 4.3f;\n }\n}\n\nvoid foo(float* a_d, float* b_d) {\n unsigned int N = 200000;\n foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d, N);\n}\n```\n\n(a) What is the number of threads **per block**?", "answer": "128", "explanation": "Given by the kernel launch <<<..., 128>>>.", "topic_tags": ["CUDA", "launch_config"]} +{"chapter": 2, "exercise": "9b", "type": "short_answer", "question": "Given the CUDA code:\n\n```c\n__global__ void foo_kernel(float* a, float* b, unsigned int N) {\n unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;\n if (i < N) {\n b[i] = 2.7f * a[i] - 4.3f;\n }\n}\n\nvoid foo(float* a_d, float* b_d) {\n unsigned int N = 200000;\n foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d, N);\n}\n```\n\n(b) What is the **number of threads in the grid**?", "answer": "200064", "explanation": "Blocks = ceil(200000/128) = (200000 + 127) // 128 = 1563; threads = 1563 * 128 = 200064.", "topic_tags": ["CUDA", "launch_config", "arithmetic"]} +{"chapter": 2, "exercise": "9c", "type": "short_answer", "question": "Given the CUDA code:\n\n```c\n__global__ void foo_kernel(float* a, float* b, unsigned int N) {\n unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;\n if (i < N) {\n b[i] = 2.7f * a[i] - 4.3f;\n }\n}\n\nvoid foo(float* a_d, float* b_d) {\n unsigned int N = 200000;\n foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d, N);\n}\n```\n\n(c) What is the **number of blocks in the grid**?", "answer": "1563", "explanation": "Computed as (N + 128 - 1) / 128 with N = 200000.", "topic_tags": ["CUDA", "launch_config"]} +{"chapter": 2, "exercise": "9d", "type": "short_answer", "question": "Given the CUDA code:\n\n```c\n__global__ void foo_kernel(float* a, float* b, unsigned int N) {\n unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;\n if (i < N) {\n b[i] = 2.7f * a[i] - 4.3f;\n }\n}\n\nvoid foo(float* a_d, float* b_d) {\n unsigned int N = 200000;\n foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d, N);\n}\n```\n\n(d) How many threads **execute the index computation line** `unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;`?", "answer": "200064", "explanation": "All launched threads execute the index computation line.", 
"topic_tags": ["CUDA", "control_flow"]} +{"chapter": 2, "exercise": "9e", "type": "short_answer", "question": "Given the CUDA code:\n\n```c\n__global__ void foo_kernel(float* a, float* b, unsigned int N) {\n unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;\n if (i < N) {\n b[i] = 2.7f * a[i] - 4.3f;\n }\n}\n\nvoid foo(float* a_d, float* b_d) {\n unsigned int N = 200000;\n foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d, N);\n}\n```\n\n(e) How many threads **execute the assignment inside the `if (i < N)`** - i.e., `b[i] = 2.7f * a[i] - 4.3f;`?", "answer": "200000", "explanation": "Only threads with i < N execute the body; extra 64 threads fail the predicate.", "topic_tags": ["CUDA", "control_flow", "bounds_check"]} +{"chapter": 3, "exercise": "3a", "type": "short_answer", "question": "Given the following CUDA code:\n\n```c\n__global__ void foo_kernel(float* a, float* b, unsigned int M, unsigned int N) {\n unsigned int row = blockIdx.y * blockDim.y + threadIdx.y;\n unsigned int col = blockIdx.x * blockDim.x + threadIdx.x;\n if (row < M && col < N) {\n b[row*N + col] = a[row*N + col]/2.1f + 4.8f;\n }\n}\n\nvoid foo(float* a_d, float* b_d) {\n unsigned int M = 150;\n unsigned int N = 300;\n dim3 bd(16, 32);\n dim3 gd((N - 1) / 16 + 1, (M - 1) / 32 + 1);\n foo_kernel<<>>(a_d, b_d, M, N);\n}\n```\n\n(a) What is the number of threads per block?", "answer": "512", "explanation": "bd = (16,32) \u21d2 threadsPerBlock = 16x32 = 512.", "topic_tags": ["CUDA", "launch_config", "threads_per_block"]} +{"chapter": 3, "exercise": "3b", "type": "short_answer", "question": "Given the following CUDA code:\n\n```c\n__global__ void foo_kernel(float* a, float* b, unsigned int M, unsigned int N) {\n unsigned int row = blockIdx.y * blockDim.y + threadIdx.y;\n unsigned int col = blockIdx.x * blockDim.x + threadIdx.x;\n if (row < M && col < N) {\n b[row*N + col] = a[row*N + col]/2.1f + 4.8f;\n }\n}\n\nvoid foo(float* a_d, float* b_d) {\n unsigned int M = 150;\n unsigned int N = 300;\n dim3 bd(16, 32);\n dim3 gd((N - 1) / 16 + 1, (M - 1) / 32 + 1);\n foo_kernel<<>>(a_d, b_d, M, N);\n}\n```\n\n(b) What is the number of threads in the grid?", "answer": "48640", "explanation": "gd = (19,5) \u21d2 blocks = 19x5 = 95. 
Threads = 95x512 = 48,640.", "topic_tags": ["CUDA", "launch_config", "thread_count", "2D_grid"]} +{"chapter": 3, "exercise": "3c", "type": "short_answer", "question": "Given the following CUDA code:\n\n```c\n__global__ void foo_kernel(float* a, float* b, unsigned int M, unsigned int N) {\n unsigned int row = blockIdx.y * blockDim.y + threadIdx.y;\n unsigned int col = blockIdx.x * blockDim.x + threadIdx.x;\n if (row < M && col < N) {\n b[row*N + col] = a[row*N + col]/2.1f + 4.8f;\n }\n}\n\nvoid foo(float* a_d, float* b_d) {\n unsigned int M = 150;\n unsigned int N = 300;\n dim3 bd(16, 32);\n dim3 gd((N - 1) / 16 + 1, (M - 1) / 32 + 1);\n foo_kernel<<>>(a_d, b_d, M, N);\n}\n```\n\n(c) What is the number of blocks in the grid?", "answer": "95", "explanation": "Blocks = gd.x x gd.y = 19 x 5 = 95.", "topic_tags": ["CUDA", "grid_dim", "launch_config"]} +{"chapter": 3, "exercise": "3d", "type": "short_answer", "question": "Given the following CUDA code:\n\n```c\n__global__ void foo_kernel(float* a, float* b, unsigned int M, unsigned int N) {\n unsigned int row = blockIdx.y * blockDim.y + threadIdx.y;\n unsigned int col = blockIdx.x * blockDim.x + threadIdx.x;\n if (row < M && col < N) {\n b[row*N + col] = a[row*N + col]/2.1f + 4.8f;\n }\n}\n\nvoid foo(float* a_d, float* b_d) {\n unsigned int M = 150;\n unsigned int N = 300;\n dim3 bd(16, 32);\n dim3 gd((N - 1) / 16 + 1, (M - 1) / 32 + 1);\n foo_kernel<<>>(a_d, b_d, M, N);\n}\n```\n\n(d) How many threads execute the assignment `b[row*N + col] = a[row*N + col]/2.1f + 4.8f;`?", "answer": "45000", "explanation": "Only threads with (row < M && col < N) execute it. Count = MxN = 150x300 = 45,000.", "topic_tags": ["CUDA", "control_flow", "bounds_check"]} +{"chapter": 3, "exercise": "4a", "type": "short_answer", "question": "A 2D matrix has width=400 and height=500 and is stored as a 1D array in row-major order. What is the linear index of the element at row=20, col=10?", "answer": "8010", "explanation": "Row-major index = row*width + col = 20*400 + 10 = 8,010.", "topic_tags": ["CUDA", "indexing", "row_major", "linearization"]} +{"chapter": 3, "exercise": "4b", "type": "short_answer", "question": "A 2D matrix has width=400 and height=500 and is stored as a 1D array in column-major order. What is the linear index of the element at row=20, col=10?", "answer": "5020", "explanation": "Column-major index = col*height + row = 10*500 + 20 = 5,020.", "topic_tags": ["CUDA", "indexing", "column_major", "linearization"]} +{"chapter": 3, "exercise": "5", "type": "short_answer", "question": "A 3D tensor has width=400 (x), height=500 (y), and depth=300 (z). It is stored as a 1D array in row-major order with index mapping idx = z*height*width + y*width + x. 
What is the linear index of the element at x=10, y=20, z=5?", "answer": "1008010", "explanation": "idx = 5*500*400 + 20*400 + 10 = 1,000,000 + 8,000 + 10 = 1,008,010.", "topic_tags": ["CUDA", "indexing", "3D", "row_major", "linearization"]} +{"chapter": 4, "exercise": "1a", "type": "short_answer", "question": "Consider the following CUDA kernel and host code:\n\n```c\n__global__ void foo_kernel(int* a, int* b) {\n unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;\n if (threadIdx.x < 40 || threadIdx.x >= 104) {\n b[i] = a[i] + 1;\n }\n if (i % 2 == 0) {\n a[i] = b[i] * 2;\n }\n for (unsigned int j = 0; j < 5 - (i % 3); ++j) {\n b[i] += j;\n }\n}\n\nvoid foo(int* a_d, int* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);\n}\n```\n\n(a) How many warps are there per block?", "answer": "4", "explanation": "Each block has 128 threads and a warp has 32 threads -> 128/32 = 4 warps per block.", "topic_tags": ["CUDA", "warps", "launch_config"]} +{"chapter": 4, "exercise": "1b", "type": "short_answer", "question": "Consider the following CUDA kernel and host code:\n\n```c\n__global__ void foo_kernel(int* a, int* b) {\n unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;\n if (threadIdx.x < 40 || threadIdx.x >= 104) {\n b[i] = a[i] + 1;\n }\n if (i % 2 == 0) {\n a[i] = b[i] * 2;\n }\n for (unsigned int j = 0; j < 5 - (i % 3); ++j) {\n b[i] += j;\n }\n}\n\nvoid foo(int* a_d, int* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);\n}\n```\n\n(b) How many warps are there in the entire grid?", "answer": "32", "explanation": "Blocks = (1024 + 128 - 1)/128 = 8. Warps per block = 4. Total warps = 8 x 4 = 32.", "topic_tags": ["CUDA", "warps", "launch_config"]} +{"chapter": 4, "exercise": "1c-i", "type": "short_answer", "question": "Consider the following CUDA kernel and host code:\n\n```c\n__global__ void foo_kernel(int* a, int* b) {\n unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;\n if (threadIdx.x < 40 || threadIdx.x >= 104) {\n b[i] = a[i] + 1;\n }\n if (i % 2 == 0) {\n a[i] = b[i] * 2;\n }\n for (unsigned int j = 0; j < 5 - (i % 3); ++j) {\n b[i] += j;\n }\n}\n\nvoid foo(int* a_d, int* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);\n}\n```\n\nFor the statement `if (threadIdx.x < 40 || threadIdx.x >= 104) { ... }`:\n(i) How many warps in the grid are active on this statement?", "answer": "24", "explanation": "Per block: warp 0 (0-31) active; warp 1 (32-63) partially active -> warp active; warp 2 (64-95) inactive; warp 3 (96-127) partially active -> warp active. So 3 active warps/block x 8 blocks = 24.", "topic_tags": ["CUDA", "control_flow", "divergence"]} +{"chapter": 4, "exercise": "1c-ii", "type": "short_answer", "question": "Consider the following CUDA kernel and host code:\n\n```c\n__global__ void foo_kernel(int* a, int* b) {\n unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;\n if (threadIdx.x < 40 || threadIdx.x >= 104) {\n b[i] = a[i] + 1;\n }\n if (i % 2 == 0) {\n a[i] = b[i] * 2;\n }\n for (unsigned int j = 0; j < 5 - (i % 3); ++j) {\n b[i] += j;\n }\n}\n\nvoid foo(int* a_d, int* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);\n}\n```\n\nFor the statement `if (threadIdx.x < 40 || threadIdx.x >= 104) { ... 
}`:\n(ii) How many warps in the grid are divergent on this statement?", "answer": "16", "explanation": "Per block, warp 1 (32-63) and warp 3 (96-127) have mixed predicates (some threads true, some false) -> 2 divergent warps/block x 8 blocks = 16.", "topic_tags": ["CUDA", "divergence", "warps"]} +{"chapter": 4, "exercise": "1c-iii", "type": "short_answer", "question": "Consider the following CUDA kernel and host code:\n\n```c\n__global__ void foo_kernel(int* a, int* b) {\n unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;\n if (threadIdx.x < 40 || threadIdx.x >= 104) {\n b[i] = a[i] + 1;\n }\n if (i % 2 == 0) {\n a[i] = b[i] * 2;\n }\n for (unsigned int j = 0; j < 5 - (i % 3); ++j) {\n b[i] += j;\n }\n}\n\nvoid foo(int* a_d, int* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);\n}\n```\n\nFor the same statement, what is the SIMD efficiency of warp 0 of block 0? Give a decimal in [0,1] with two decimals; do not include %.", "answer": "1.00", "explanation": "Warp 0 covers threads 0-31; all satisfy `threadIdx.x < 40`. Active lanes = 32/32 = 100%.", "topic_tags": ["CUDA", "SIMD_efficiency", "warps"]} +{"chapter": 4, "exercise": "1c-iv", "type": "short_answer", "question": "Consider the following CUDA kernel and host code:\n\n```c\n__global__ void foo_kernel(int* a, int* b) {\n unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;\n if (threadIdx.x < 40 || threadIdx.x >= 104) {\n b[i] = a[i] + 1;\n }\n if (i % 2 == 0) {\n a[i] = b[i] * 2;\n }\n for (unsigned int j = 0; j < 5 - (i % 3); ++j) {\n b[i] += j;\n }\n}\n\nvoid foo(int* a_d, int* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);\n}\n```\n\nFor the same statement, what is the SIMD efficiency of warp 1 of block 0? Give a decimal in [0,1] with two decimals; do not include %.", "answer": "0.25", "explanation": "Warp 1 covers 32-63; only 32-39 (8 lanes) satisfy the predicate. Efficiency = 8/32 = 25%.", "topic_tags": ["CUDA", "SIMD_efficiency", "divergence"]} +{"chapter": 4, "exercise": "1c-v", "type": "short_answer", "question": "Consider the following CUDA kernel and host code:\n\n```c\n__global__ void foo_kernel(int* a, int* b) {\n unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;\n if (threadIdx.x < 40 || threadIdx.x >= 104) {\n b[i] = a[i] + 1;\n }\n if (i % 2 == 0) {\n a[i] = b[i] * 2;\n }\n for (unsigned int j = 0; j < 5 - (i % 3); ++j) {\n b[i] += j;\n }\n}\n\nvoid foo(int* a_d, int* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);\n}\n```\n\nFor the same statement, what is the SIMD efficiency of warp 3 of block 0? Give a decimal in [0,1] with two decimals; do not include %.", "answer": "0.75", "explanation": "Warp 3 covers 96-127; only 104-127 (24 lanes) satisfy the predicate. Efficiency = 24/32 = 75%.", "topic_tags": ["CUDA", "SIMD_efficiency", "divergence"]} +{"chapter": 4, "exercise": "1d-i", "type": "short_answer", "question": "Consider the following CUDA kernel and host code:\n\n```c\n__global__ void foo_kernel(int* a, int* b) {\n unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;\n if (threadIdx.x < 40 || threadIdx.x >= 104) {\n b[i] = a[i] + 1;\n }\n if (i % 2 == 0) {\n a[i] = b[i] * 2;\n }\n for (unsigned int j = 0; j < 5 - (i % 3); ++j) {\n b[i] += j;\n }\n}\n\nvoid foo(int* a_d, int* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);\n}\n```\n\nFor the statement `if (i % 2 == 0) { ... 
}`:\n(i) How many warps in the grid are active on this statement?", "answer": "32", "explanation": "All warps reach the statement; within each warp, half the threads satisfy `i % 2 == 0`, but the warp itself is active. Total warps = 32.", "topic_tags": ["CUDA", "control_flow", "warps"]} +{"chapter": 4, "exercise": "1d-ii", "type": "short_answer", "question": "Consider the following CUDA kernel and host code:\n\n```c\n__global__ void foo_kernel(int* a, int* b) {\n unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;\n if (threadIdx.x < 40 || threadIdx.x >= 104) {\n b[i] = a[i] + 1;\n }\n if (i % 2 == 0) {\n a[i] = b[i] * 2;\n }\n for (unsigned int j = 0; j < 5 - (i % 3); ++j) {\n b[i] += j;\n }\n}\n\nvoid foo(int* a_d, int* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);\n}\n```\n\nFor the statement `if (i % 2 == 0) { ... }`:\n(ii) How many warps in the grid are divergent on this statement?", "answer": "32", "explanation": "Within every warp, half the lanes are even and half odd, so every warp diverges on this predicate.", "topic_tags": ["CUDA", "divergence", "warps"]} +{"chapter": 4, "exercise": "1d-iii", "type": "short_answer", "question": "Consider the following CUDA kernel and host code:\n\n```c\n__global__ void foo_kernel(int* a, int* b) {\n unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;\n if (threadIdx.x < 40 || threadIdx.x >= 104) {\n b[i] = a[i] + 1;\n }\n if (i % 2 == 0) {\n a[i] = b[i] * 2;\n }\n for (unsigned int j = 0; j < 5 - (i % 3); ++j) {\n b[i] += j;\n }\n}\n\nvoid foo(int* a_d, int* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);\n}\n```\n\nFor the statement `if (i % 2 == 0) { ... }`:\n(iii) What is the SIMD efficiency of warp 0 of block 0? Give a decimal between 0 and 1 with two decimals; do not include a % sign.", "answer": "0.50", "explanation": "Exactly half the lanes (even indices) are active: 16/32 = 50%.", "topic_tags": ["CUDA", "SIMD_efficiency", "divergence"]} +{"chapter": 4, "exercise": "1e-i", "type": "short_answer", "question": "Consider the following CUDA kernel and host code:\n\n```c\n__global__ void foo_kernel(int* a, int* b) {\n unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;\n if (threadIdx.x < 40 || threadIdx.x >= 104) {\n b[i] = a[i] + 1;\n }\n if (i % 2 == 0) {\n a[i] = b[i] * 2;\n }\n for (unsigned int j = 0; j < 5 - (i % 3); ++j) {\n b[i] += j;\n }\n}\n\nvoid foo(int* a_d, int* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);\n}\n```\n\nConsider the loop `for (unsigned int j = 0; j < 5 - (i % 3); ++j) { ... }`.\n(i) How many loop iterations (values of j) execute with no divergence across the entire grid?", "answer": "3", "explanation": "Threads with i%3 in {0,1,2} have bounds 5,4,3 respectively. For j = 0,1,2 all threads execute; for j = 3 and 4 some threads do not, causing divergence. 
Hence 3 non-divergent iterations.", "topic_tags": ["CUDA", "divergence", "control_flow"]} +{"chapter": 4, "exercise": "1e-ii", "type": "short_answer", "question": "Consider the following CUDA kernel and host code:\n\n```c\n__global__ void foo_kernel(int* a, int* b) {\n unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;\n if (threadIdx.x < 40 || threadIdx.x >= 104) {\n b[i] = a[i] + 1;\n }\n if (i % 2 == 0) {\n a[i] = b[i] * 2;\n }\n for (unsigned int j = 0; j < 5 - (i % 3); ++j) {\n b[i] += j;\n }\n}\n\nvoid foo(int* a_d, int* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1)/128, 128>>>(a_d, b_d);\n}\n```\n\nFor the same loop:\n(ii) How many loop iterations (values of j) have divergence somewhere in the grid?", "answer": "2", "explanation": "Iterations j = 3 and j = 4 are executed by only some threads (depending on i%3), so both are divergent.", "topic_tags": ["CUDA", "divergence", "control_flow"]} +{"chapter": 4, "exercise": "2", "type": "short_answer", "question": "Vector addition of length 2000; one output element per thread; thread block size = 512 threads. Using the minimum number of blocks to cover all elements, how many threads are in the grid?", "answer": "2048", "explanation": "Blocks = ceil(2000/512) = 4; threads = 4 x 512 = 2048.", "topic_tags": ["CUDA", "launch_config", "arithmetic"]} +{"chapter": 4, "exercise": "3", "type": "short_answer", "question": " with 2000 elements and 512 threads per block (minimal blocks), how many warps do you expect to have divergence due to the boundary check (threads skipping work past N)?", "answer": "1", "explanation": "Total warps = 2048/32 = 64. Only the warp covering thread indices 1984-2015 has some active (<=1999) and some inactive (>=2000) threads. The final warp (2016-2047) has all threads inactive (no divergence).", "topic_tags": ["CUDA", "divergence", "warps"]} +{"chapter": 4, "exercise": "4", "type": "short_answer", "question": "A block with 8 threads executes a section before a barrier. Times (us) to reach the barrier are: 2.0, 2.3, 3.0, 2.8, 2.4, 1.9, 2.6, 2.9. Threads then wait at the barrier until the slowest arrives. What percentage of the aggregate thread time is spent waiting? Give a percentage to one decimal place; do not include %.", "answer": "17.1", "explanation": "Max time = 3.0. Waiting per thread = (3.0 - t). Sum waits = 4.1 us. Aggregate time = 8 x 3.0 = 24 us. Percentage ~ 4.1/24 ~ 17.1%.", "topic_tags": ["CUDA", "synchronization", "barriers"]} +{"chapter": 4, "exercise": "6", "type": "short_answer", "question": "An SM supports up to 1536 threads and up to 4 blocks concurrently. For a single SM, which block size among {128, 256, 512, 1024} yields the maximum number of resident threads, and how many threads does it schedule? Return the result as a tuple (BLOCK_SIZE, THREADS).", "answer": "(512, 1536)", "explanation": "For 128: min(4, 1536/128)=4 blocks -> 4x128=512 threads. For 256: 4 blocks -> 1024. For 512: 3 blocks (limited by total threads) -> 3x512=1536. For 1024: 1 block -> 1024. Max is 1536 with 512/block.", "topic_tags": ["CUDA", "occupancy", "launch_config"]} +{"chapter": 4, "exercise": "7", "type": "short_answer", "question": "A device allows up to 64 blocks per SM and 2048 threads per SM. 
For each per-SM assignment below, state if it's possible and the occupancy (% of 2048 threads):\n(a) 8 blocks x 128 threads\n(b) 16 blocks x 64 threads\n(c) 32 blocks x 32 threads\n(d) 64 blocks x 32 threads\n(e) 32 blocks x 64 threads. Provide five semicolon-separated tuples (ans,occ) for (a)-(e), where ans is Yes/No and occ is an integer percent with no %.", "answer": "(Yes,50);(Yes,50);(Yes,50);(Yes,100);(Yes,100)", "explanation": "Check blocks \u2264 64 and total threads \u2264 2048 for each case; occupancy = total_threads / 2048.", "topic_tags": ["CUDA", "occupancy", "SM_limits"]} +{"chapter": 4, "exercise": "8", "type": "short_answer", "question": "A GPU has 2048 threads/SM, 32 blocks/SM, and 65,536 registers/SM. For each kernel, can it achieve full occupancy (2048 threads/SM)? If not, what limits it?\n(a) 128 threads/block, 30 registers/thread\n(b) 32 threads/block, 29 registers/thread\n(c) 256 threads/block, 34 registers/thread. Provide three semicolon-separated tuples (ans,limit,threads) for (a)-(c), where ans is Yes/No, limit in {none,blocks,registers}, threads is an integer.", "answer": "(Yes,none,2048);(No,blocks,1024);(No,registers,1792)", "explanation": "For each case, first bound by blocks from threads/SM, then check blocks limited by registers per block. Compare resulting resident threads to 2048.", "topic_tags": ["CUDA", "occupancy", "registers", "SM_limits"]} +{"chapter": 4, "exercise": "2a-i", "type": "short_answer", "question": "We multiply two 8x8 matrices C=AxB on a GPU. One thread computes one C[i,j]. Using shared-memory tiling with tile size T=2 (2x2 tiles), and assuming no caching except shared memory, how many total global-memory element loads (A and B only; ignore the 64 stores of C) are performed for the full multiplication? Give an integer; do not include units or x.", "answer": "512", "explanation": "Baseline (no tiling, T=1): 64 outputs x 8 MACs/output x 2 loads/MAC = 1024 loads. With tiling of size T, loads = (8/T)^2 x (8/T) x 2T^2 = 1024/T. For T=2 -> 1024/2 = 512.", "topic_tags": ["CUDA", "tiling", "matrix_multiplication", "memory_bandwidth"]} +{"chapter": 4, "exercise": "2a-ii", "type": "short_answer", "question": "Consider an 8x8 matrix multiplication with tile size T=2. What is the reduction factor in global-memory traffic versus the naive untiled case (T=1)? Give an integer; do not include units or x.", "answer": "2", "explanation": "Untiled: 1024 loads. T=2: 512 loads. Reduction factor = 1024 / 512 = 2x.", "topic_tags": ["CUDA", "tiling", "matrix_multiplication", "memory_bandwidth"]} +{"chapter": 4, "exercise": "2b-i", "type": "short_answer", "question": "Now use tile size T=4 (4x4 tiles) for an 8x8 matrix multiplication. How many total global-memory element loads (A and B only; ignore C stores) are performed?", "answer": "256", "explanation": "Using loads = 1024/T with N=8, for T=4 we have 1024/4 = 256.", "topic_tags": ["CUDA", "tiling", "matrix_multiplication", "memory_bandwidth"]} +{"chapter": 4, "exercise": "2b-ii", "type": "short_answer", "question": "Consider an 8x8 matrix multiplication with tile size T=4. What is the reduction factor in global-memory traffic relative to the naive untiled case (T=1)? Give an integer; do not include units or x.", "answer": "4", "explanation": "Untiled: 1024 loads. T=4: 256 loads. 
Reduction factor = 1024 / 256 = 4x, confirming linear scaling with tile dimension.", "topic_tags": ["CUDA", "tiling", "matrix_multiplication", "memory_bandwidth"]} +{"chapter": 5, "exercise": "1", "type": "short_answer", "question": "Consider matrix addition C = A + B. Can shared memory be used to reduce global memory bandwidth consumption for this kernel? Briefly justify your answer.", "answer": "No.", "explanation": "In element-wise addition, each output element C[i,j] depends only on A[i,j] and B[i,j] once. Threads do not reuse neighbors' A or B values, so there is no inter-thread temporal locality to exploit. Caching A/B into shared memory would just add extra copies without reducing global loads.", "topic_tags": ["CUDA", "shared_memory", "data_reuse", "bandwidth"]} +{"chapter": 5, "exercise": "4", "type": "short_answer", "question": "Assume register and shared memory capacities are not limiting. Give one important reason why using shared memory (instead of registers) to hold values fetched from global memory can be valuable.", "answer": "Shared memory enables inter-thread data sharing within a block.", "explanation": "Registers are private to a thread. Shared memory is visible to all threads in a block, so a value fetched once from global memory can be reused by multiple threads, reducing global traffic.", "topic_tags": ["CUDA", "shared_memory", "registers", "data_sharing"]} +{"chapter": 5, "exercise": "5", "type": "short_answer", "question": "For a tiled matrix-matrix multiplication kernel using 32x32 tiles, what is the reduction in global memory bandwidth usage for the input matrices M and N (compared to the untiled naive access), assuming ideal reuse within a tile? Provide the two reduction factors as M,N (integers); do not include x or text.", "answer": "32,32", "explanation": "Within a 32x32 tile, each loaded element of M (resp. N) is reused across 32 multiply-accumulates along the tile dimension, replacing 32 separate global loads in the naive scheme. Thus, global loads per useful use drop by ~32x for both inputs.", "topic_tags": ["CUDA", "tiling", "matrix_multiplication", "bandwidth", "reuse"]} +{"chapter": 5, "exercise": "6", "type": "short_answer", "question": "A CUDA kernel is launched with 1000 thread blocks, each with 512 threads. If a variable is declared as a local (per-thread) variable inside the kernel, how many distinct instances of this variable are created during execution?", "answer": "512,000", "explanation": "Local variables are per-thread. Total threads = 1000 blocks x 512 threads/block = 512,000 instances.", "topic_tags": ["CUDA", "memory_model", "locals", "threads"]} +{"chapter": 5, "exercise": "7", "type": "short_answer", "question": "A CUDA kernel is launched with 1000 thread blocks, each with 512 threads. if a variable is declared in shared memory, how many distinct instances of this variable are created during execution?", "answer": "1,000", "explanation": "Shared memory is per-block. There is exactly one instance per block -> 1000 instances total.", "topic_tags": ["CUDA", "shared_memory", "blocks", "memory_model"]} +{"chapter": 5, "exercise": "8a", "type": "short_answer", "question": "For multiplying two NxN matrices without tiling, how many times is each input element requested from global memory?", "answer": "N times.", "explanation": "In the naive kernel each output element recomputes its dot product by reloading the same row/column elements. 
Each input element participates in N different outputs along the corresponding dimension, leading to N separate loads.", "topic_tags": ["CUDA", "matrix_multiplication", "naive", "global_memory"]} +{"chapter": 5, "exercise": "8b", "type": "short_answer", "question": "For multiplying two NxN matrices with TxT tiling (ideal reuse within tiles), how many times is each input element requested from global memory?", "answer": "N/T times.", "explanation": "Each input element is fetched once per tile-stripe it participates in along the multiply dimension. Tiling reduces redundant loads by a factor of T, so loads per element drop from N to N/T.", "topic_tags": ["CUDA", "tiling", "matrix_multiplication", "global_memory"]} +{"chapter": 5, "exercise": "9a", "type": "short_answer", "question": "A CUDA kernel performs 36 floating-point operations and 7 global 32-bit (4-byte) memory accesses per thread. On a GPU with peak 200 GFLOP/s compute throughput and 100 GB/s memory bandwidth, is the kernel compute-bound or memory-bound? Justify briefly using a roofline-style argument.", "answer": "Memory-bound.", "explanation": "Arithmetic intensity = 36 FLOPs / (7x4 B) = 36/28 ~ 1.286 FLOP/B. Machine balance = 200 GFLOP/s \u00f7 100 GB/s = 2.0 FLOP/B. Since 1.286 < 2.0, performance is limited by memory bandwidth.", "topic_tags": ["roofline", "arithmetic_intensity", "compute_bound", "memory_bound"]} +{"chapter": 5, "exercise": "9b", "type": "short_answer", "question": "A CUDA kernel performs 36 floating-point operations and 7 global 32-bit (4-byte) memory accesses per thread. On a GPU with peak 300 GFLOP/s compute throughput and 250 GB/s memory bandwidth, is the kernel compute-bound or memory-bound? Justify briefly using a roofline-style argument.", "answer": "Compute-bound.", "explanation": "Arithmetic intensity = 36 FLOPs / (7x4 B) = 36/28 ~ 1.286 FLOP/B. Machine balance = 300 GFLOP/s \u00f7 250 GB/s = 1.2 FLOP/B. Since 1.286 > 1.2, performance is limited by compute throughput.", "topic_tags": ["roofline", "arithmetic_intensity", "compute_bound", "memory_bound"]} +{"chapter": 5, "exercise": "10a", "type": "short_answer", "question": "You are given this tile-transpose kernel (abbrev.):\n\n```cpp\ndim3 blockDim(BLOCK_WIDTH,BLOCK_WIDTH);\ndim3 gridDim(A_width/blockDim.x, A_height/blockDim.y);\nBlockTranspose<<>>(A, A_width, A_height);\n\n__global__ void BlockTranspose(float* A_elements, int A_width, int A_height) {\n __shared__ float blockA[BLOCK_WIDTH][BLOCK_WIDTH];\n int baseIdx = blockIdx.x * BLOCK_WIDTH + threadIdx.x;\n baseIdx += (blockIdx.y * BLOCK_WIDTH + threadIdx.y) * A_width;\n blockA[threadIdx.y][threadIdx.x] = A_elements[baseIdx];\n // (no barrier here)\n A_elements[baseIdx] = blockA[threadIdx.x][threadIdx.y];\n}\n```\nFor which BLOCK_WIDTH values does this execute correctly?", "answer": "Only BLOCK_WIDTH = 1.", "explanation": "Without a barrier, some threads can read from shared memory before peers have written their elements. 
With a 1x1 block, there is only one thread and no race; for any larger tile, a race exists.", "topic_tags": ["CUDA", "synchronization", "shared_memory", "barriers"]} +{"chapter": 5, "exercise": "10b", "type": "short_answer", "question": "You are given this tile-transpose kernel (abbreviated):\n\n```cpp\ndim3 blockDim(BLOCK_WIDTH, BLOCK_WIDTH);\ndim3 gridDim(A_width / blockDim.x, A_height / blockDim.y);\nBlockTranspose<<>>(A, A_width, A_height);\n\n__global__ void BlockTranspose(float* A_elements, int A_width, int A_height) {\n __shared__ float blockA[BLOCK_WIDTH][BLOCK_WIDTH];\n int baseIdx = blockIdx.x * BLOCK_WIDTH + threadIdx.x;\n baseIdx += (blockIdx.y * BLOCK_WIDTH + threadIdx.y) * A_width;\n blockA[threadIdx.y][threadIdx.x] = A_elements[baseIdx];\n // (no barrier here)\n A_elements[baseIdx] = blockA[threadIdx.x][threadIdx.y];\n}\n```\nExplain the root cause of incorrect execution for BLOCK_WIDTH > 1 and give a minimal fix that makes it correct for any BLOCK_WIDTH >= 1. Give a single token naming the minimal synchronization fix.", "answer": "__syncthreads()", "explanation": "All threads must complete their writes to shared memory before any thread reads `blockA[tx][ty]`. Without a barrier, reads can see stale values (race). Adding `__syncthreads();` between the store and the load enforces correctness for any tile size.", "topic_tags": ["CUDA", "synchronization", "shared_memory", "barriers"]} +{"chapter": 5, "exercise": "11a", "type": "short_answer", "question": "Consider the following CUDA code:\n\n```cpp\n__global__ void foo_kernel(float* a, float* b) {\n unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;\n float x[4];\n __shared__ float y_s;\n __shared__ float b_s[128];\n for (unsigned int j = 0; j < 4; ++j) {\n x[j] = a[j * blockDim.x * gridDim.x + i];\n }\n if (threadIdx.x == 0) { y_s = 7.4f; }\n b_s[threadIdx.x] = b[i];\n __syncthreads();\n b[i] = 2.5f*x[0] + 3.7f*x[1] + 6.3f*x[2] + 8.5f*x[3]\n + y_s*b_s[threadIdx.x] + b_s[(threadIdx.x + 3) % 128];\n}\n\nvoid foo(float* a_d, float* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d);\n}\n```\nHow many distinct instances of variable `i` exist during execution?", "answer": "1,024", "explanation": "Blocks = (1024 + 127) / 128 = 8; threads per block = 128; total threads = 8 x 128 = 1,024. `i` is per-thread.", "topic_tags": ["CUDA", "locals", "launch_config", "threads"]} +{"chapter": 5, "exercise": "11b", "type": "short_answer", "question": "Consider the following CUDA code:\n\n```cpp\n__global__ void foo_kernel(float* a, float* b) {\n unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;\n float x[4];\n __shared__ float y_s;\n __shared__ float b_s[128];\n for (unsigned int j = 0; j < 4; ++j) {\n x[j] = a[j * blockDim.x * gridDim.x + i];\n }\n if (threadIdx.x == 0) { y_s = 7.4f; }\n b_s[threadIdx.x] = b[i];\n __syncthreads();\n b[i] = 2.5f*x[0] + 3.7f*x[1] + 6.3f*x[2] + 8.5f*x[3]\n + y_s*b_s[threadIdx.x] + b_s[(threadIdx.x + 3) % 128];\n}\n\nvoid foo(float* a_d, float* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d);\n}\n```\nHow many distinct instances of the local array `x[4]` are created during execution?", "answer": "1,024", "explanation": "`x` is a per-thread local array. 
With 1,024 threads total, there are 1,024 instances.", "topic_tags": ["CUDA", "locals", "stack_memory", "threads"]} +{"chapter": 5, "exercise": "11c", "type": "short_answer", "question": "Consider the following CUDA code:\n\n```cpp\n__global__ void foo_kernel(float* a, float* b) {\n unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;\n float x[4];\n __shared__ float y_s;\n __shared__ float b_s[128];\n for (unsigned int j = 0; j < 4; ++j) {\n x[j] = a[j * blockDim.x * gridDim.x + i];\n }\n if (threadIdx.x == 0) { y_s = 7.4f; }\n b_s[threadIdx.x] = b[i];\n __syncthreads();\n b[i] = 2.5f*x[0] + 3.7f*x[1] + 6.3f*x[2] + 8.5f*x[3]\n + y_s*b_s[threadIdx.x] + b_s[(threadIdx.x + 3) % 128];\n}\n\nvoid foo(float* a_d, float* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d);\n}\n```\nHow many distinct instances of the shared variable `y_s` are created during execution?", "answer": "8", "explanation": "Shared memory is per-block. Blocks = (1024 + 127) / 128 = 8 -> 8 instances of `y_s`.", "topic_tags": ["CUDA", "shared_memory", "blocks"]} +{"chapter": 5, "exercise": "11d", "type": "short_answer", "question": "Consider the following CUDA code:\n\n```cpp\n__global__ void foo_kernel(float* a, float* b) {\n unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;\n float x[4];\n __shared__ float y_s;\n __shared__ float b_s[128];\n for (unsigned int j = 0; j < 4; ++j) {\n x[j] = a[j * blockDim.x * gridDim.x + i];\n }\n if (threadIdx.x == 0) { y_s = 7.4f; }\n b_s[threadIdx.x] = b[i];\n __syncthreads();\n b[i] = 2.5f*x[0] + 3.7f*x[1] + 6.3f*x[2] + 8.5f*x[3]\n + y_s*b_s[threadIdx.x] + b_s[(threadIdx.x + 3) % 128];\n}\n\nvoid foo(float* a_d, float* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d);\n}\n```\nHow many distinct instances of the shared array `b_s[128]` are created during execution?", "answer": "8", "explanation": "Shared arrays are also per-block. With 8 blocks, there are 8 instances of `b_s`.", "topic_tags": ["CUDA", "shared_memory", "blocks"]} +{"chapter": 5, "exercise": "11e", "type": "short_answer", "question": "Consider the following CUDA code:\n\n```cpp\n__global__ void foo_kernel(float* a, float* b) {\n unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;\n float x[4];\n __shared__ float y_s;\n __shared__ float b_s[128];\n for (unsigned int j = 0; j < 4; ++j) {\n x[j] = a[j * blockDim.x * gridDim.x + i];\n }\n if (threadIdx.x == 0) { y_s = 7.4f; }\n b_s[threadIdx.x] = b[i];\n __syncthreads();\n b[i] = 2.5f*x[0] + 3.7f*x[1] + 6.3f*x[2] + 8.5f*x[3]\n + y_s*b_s[threadIdx.x] + b_s[(threadIdx.x + 3) % 128];\n}\n\nvoid foo(float* a_d, float* b_d) {\n unsigned int N = 1024;\n foo_kernel<<<(N + 128 - 1) / 128, 128>>>(a_d, b_d);\n}\n```\nWhat is the amount of shared memory used per block (in bytes)? 
Give an integer (bytes); do not include units.", "answer": "516", "explanation": "`y_s`: 1 float = 4 bytes; `b_s[128]`: 128 floats = 512 bytes; total = 516 bytes.", "topic_tags": ["CUDA", "shared_memory", "resources"]} +{"chapter": 5, "exercise": "11f", "type": "short_answer", "question": "Consider the following CUDA code and compute the floating-point-operations-per-byte (OP/B) ratio per thread with respect to global memory traffic (assume 4-byte floats, count both reads and the final write):\n\n```cpp\n__global__ void foo_kernel(float* a, float* b) {\n unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;\n float x[4];\n __shared__ float y_s;\n __shared__ float b_s[128];\n for (unsigned int j = 0; j < 4; ++j) {\n x[j] = a[j * blockDim.x * gridDim.x + i];\n }\n if (threadIdx.x == 0) { y_s = 7.4f; }\n b_s[threadIdx.x] = b[i];\n __syncthreads();\n b[i] = 2.5f*x[0] + 3.7f*x[1] + 6.3f*x[2] + 8.5f*x[3]\n + y_s*b_s[threadIdx.x] + b_s[(threadIdx.x + 3) % 128];\n}\n```\nWhat is the OP/B value? Round to 3 decimals; provide only the numeric value.", "answer": "0.417", "explanation": "Per thread FLOPs = 5 mul + 5 add = 10. Global traffic: 4 reads from `a` (16 B) + 1 read from `b` (4 B) + 1 write to `b` (4 B) = 24 B. OP/B = 10 / 24 ~ 0.417.", "topic_tags": ["CUDA", "operational_intensity", "roofline", "memory_traffic"]} +{"chapter": 5, "exercise": "12a", "type": "short_answer", "question": "GPU limits: 2048 threads/SM, 32 blocks/SM, 65,536 registers/SM, 96 KB shared memory/SM. Kernel uses 64 threads/block, 27 registers/thread, and **4 KB shared memory per block**. Can it reach full occupancy (2048 threads/SM)? If not, what limits it and what is the achieved occupancy? Provide a tuple (Yes|No,limiting_resource,threads) with limiting_resource in {none,shared_memory,blocks,registers}.", "answer": "(No,shared_memory,1536)", "explanation": "Max blocks by threads = 2048/64 = 32 (OK). Registers: 27x2048 = 55,296 < 65,536 (OK). Shared memory: at 4 KB per block, 96 KB/4 KB = 24 blocks fit, so threads = 24x64 = 1,536 -> 75% occupancy. Limiting factor: shared memory per SM.", "topic_tags": ["CUDA", "occupancy", "resources", "shared_memory", "registers"]} +{"chapter": 5, "exercise": "12b", "type": "short_answer", "question": "GPU limits: 2048 threads/SM, 32 blocks/SM, 65,536 registers/SM, 96 KB shared memory/SM. Kernel uses 256 threads/block, 31 registers/thread, and **8 KB shared memory per block**. Can it reach full occupancy (2048 threads/SM)? If not, what limits it and what is the achieved occupancy? Provide a tuple (Yes|No,limiting_resource,threads) with limiting_resource in {none,shared_memory,blocks,registers}.", "answer": "(Yes,none,2048)", "explanation": "Max blocks by threads = 2048/256 = 8. Registers: 31x2048 = 63,488 < 65,536 (OK). Shared memory: 8 blocks x 8 KB = 64 KB < 96 KB (OK). Blocks/SM limit is 32 (not binding). All constraints allow 8 blocks x 256 threads = 2048.", "topic_tags": ["CUDA", "occupancy", "resources", "shared_memory", "registers"]} +{"chapter": 6, "exercise": "2a", "type": "mcq", "question": "A 2D tiled GEMM uses a BLOCK_SIZExBLOCK_SIZE thread block. Threads are indexed (ty, tx). In each phase, threads cooperatively load: M[row, ph*BLOCK_SIZE + tx] and (corner-turned) N[ph*BLOCK_SIZE + ty, col], where row = blockIdx.y*BLOCK_SIZE + ty and col = blockIdx.x*BLOCK_SIZE + tx. Arrays are row-major 4-byte floats. Warps have 32 lanes and lanes vary along x (a warp spans 32 consecutive tx at fixed ty). 
Which BLOCK_SIZE guarantees that both the M and N loads of every warp are fully coalesced into a single contiguous segment?", "choices": ["A. 8", "B. 16", "C. 32", "D. Any power of two"], "answer": "C", "explanation": "With warps laid out along x, coalescing requires each warp to cover 32 consecutive tx at fixed ty so addresses are contiguous for both loads. BLOCK_SIZE=32 aligns one warp per row; 8 or 16 split a warp across multiple rows.", "topic_tags": ["CUDA", "coalescing", "tiling", "warps", "memory"]} +{"chapter": 6, "exercise": "2b", "type": "mcq", "question": "Using the setup described in a BLOCK_SIZE x BLOCK_SIZE tiled GEMM with corner-turned N load, row-major floats, and 32-lane warps spanning x (BLOCK_SIZExBLOCK_SIZE tiled GEMM, corner-turned N load, row-major floats, warps span x with 32 lanes), if BLOCK_SIZE=16, what is the coalescing behavior of a warp's global loads for M and N?", "choices": ["A. Fully coalesced into a single contiguous segment per warp", "B. Two contiguous segments per warp (warp spans two 16-wide rows)", "C. Uncoalesced/random access", "D. Depends only on base address alignment"], "answer": "B", "explanation": "A 32-lane warp spans two 16-wide rows at fixed ty, so both M and N loads become two 16-element contiguous segments per warp rather than one 32-element segment.", "topic_tags": ["CUDA", "coalescing", "warps", "tiling"]} +{"chapter": 6, "exercise": "3a", "type": "mcq", "question": "Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: a[i].", "choices": ["A. Coalesced", "B. Uncoalesced", "C. Not applicable (shared memory)", "D. Unaligned but still fully coalesced due to caching"], "answer": "A", "explanation": "Consecutive threads in a warp access consecutive elements a[i]; this is the canonical coalesced pattern.", "topic_tags": ["CUDA", "coalescing", "global_memory"]} +{"chapter": 6, "exercise": "3b", "type": "mcq", "question": "Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: a_s[threadIdx.x], where a_s is declared in __shared__ memory.", "choices": ["A. Coalesced", "B. Uncoalesced", "C. Not applicable (shared memory)", "D. Unaligned but still fully coalesced due to caching"], "answer": "C", "explanation": "Coalescing applies to global memory transactions. Shared memory has different banking rules; coalescing classification is not applicable.", "topic_tags": ["CUDA", "shared_memory", "coalescing"]} +{"chapter": 6, "exercise": "3c", "type": "mcq", "question": "Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: b[j*blockDim.x*gridDim.x + i] with j fixed inside the loop.", "choices": ["A. Coalesced", "B. Uncoalesced", "C. Not applicable (shared memory)", "D. 
Unaligned but still fully coalesced due to caching"], "answer": "A", "explanation": "For a fixed j, threads in a warp access consecutive indices offset by a constant base; addresses are contiguous across the warp.", "topic_tags": ["CUDA", "coalescing", "global_memory"]} +{"chapter": 6, "exercise": "3d", "type": "mcq", "question": "Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: c[i*4 + j], with j fixed inside the loop.", "choices": ["A. Coalesced", "B. Uncoalesced", "C. Not applicable (shared memory)", "D. Unaligned but still fully coalesced due to caching"], "answer": "B", "explanation": "Across a warp, i increases by 1 so addresses stride by 4 elements (16 B) per thread, leading to multiple memory transactions (not contiguous).", "topic_tags": ["CUDA", "coalescing", "strided_access"]} +{"chapter": 6, "exercise": "3e", "type": "mcq", "question": "Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: bc_s[j*256 + threadIdx.x], where bc_s is __shared__ memory.", "choices": ["A. Coalesced", "B. Uncoalesced", "C. Not applicable (shared memory)", "D. Unaligned but still fully coalesced due to caching"], "answer": "C", "explanation": "Shared memory access; global-memory coalescing classification does not apply.", "topic_tags": ["CUDA", "shared_memory"]} +{"chapter": 6, "exercise": "3f", "type": "mcq", "question": "Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: a_s[threadIdx.x] (read), where a_s is __shared__ memory.", "choices": ["A. Coalesced", "B. Uncoalesced", "C. Not applicable (shared memory)", "D. Unaligned but still fully coalesced due to caching"], "answer": "C", "explanation": "Shared memory read; coalescing is a global-memory concept.", "topic_tags": ["CUDA", "shared_memory"]} +{"chapter": 6, "exercise": "3g", "type": "mcq", "question": "Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: d[i + 8] (global write).", "choices": ["A. Coalesced", "B. Uncoalesced", "C. Not applicable (shared memory)", "D. Unaligned but still fully coalesced due to caching"], "answer": "A", "explanation": "The +8 is a constant offset; adjacent threads still write consecutive locations, so the warp issues contiguous transactions.", "topic_tags": ["CUDA", "coalescing", "global_store"]} +{"chapter": 6, "exercise": "3h", "type": "mcq", "question": "Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. 
Access: bc_s[threadIdx.x*4] (read), where bc_s is __shared__ memory.", "choices": ["A. Coalesced", "B. Uncoalesced", "C. Not applicable (shared memory)", "D. Unaligned but still fully coalesced due to caching"], "answer": "C", "explanation": "Shared memory read; coalescing classification is not applicable (banking rules apply instead).", "topic_tags": ["CUDA", "shared_memory"]} +{"chapter": 6, "exercise": "3i", "type": "mcq", "question": "Classify the following memory access for coalescing on modern CUDA GPUs. Assume: 1D launch; i = blockIdx.x*blockDim.x + threadIdx.x; warps are 32 threads along x; arrays are float* (4 B) in global memory unless noted; shared memory accesses are not subject to global coalescing. Access: e[i*8] (global write).", "choices": ["A. Coalesced", "B. Uncoalesced", "C. Not applicable (shared memory)", "D. Unaligned but still fully coalesced due to caching"], "answer": "B", "explanation": "Thread addresses stride by 8 elements (32 B) per lane across the warp; not a single contiguous segment, so it's uncoalesced.", "topic_tags": ["CUDA", "coalescing", "strided_access", "global_store"]} +{"chapter": 6, "exercise": "4a", "type": "short_answer", "question": "Arithmetic intensity (FLOP/B) for naive GEMM: One thread computes P[i,j] with a loop over k=0..n-1, doing one multiply and one add per k (2 FLOPs) and reading M[i,k] and N[k,j] from global memory each iteration. Assume 4-byte floats and ignore output writes. What is the arithmetic intensity?", "answer": "0.25", "explanation": "Per output: 2n FLOPs and 2n reads x 4 B = 8n B -> (2n)/(8n) = 0.25 FLOP/B.", "topic_tags": ["roofline", "arithmetic_intensity", "GEMM"]} +{"chapter": 6, "exercise": "4b", "type": "short_answer", "question": "Arithmetic intensity (FLOP/B) for tiled GEMM with BLOCK_SIZE=T: Each phase loads one TxT tile of M and one TxT tile of N from global memory and reuses them from shared memory to produce a TxT output tile. Assume 4-byte floats and ignore output writes. For T=32, what is the arithmetic intensity?", "answer": "8", "explanation": "Global reads per output = 2n/T elements -> 8n/T bytes; FLOPs per output = 2n. Intensity = (2n)/(8n/T) = T/4 = 8 for T=32.", "topic_tags": ["roofline", "arithmetic_intensity", "tiling", "GEMM"]} +{"chapter": 6, "exercise": "4c", "type": "short_answer", "question": "Arithmetic intensity (FLOP/B) for tiled GEMM with thread coarsening factor C=4: The M tile loaded from global memory is reused across 4 adjacent output tiles; N tiles are not further reused beyond standard tiling. Assume 4-byte floats and ignore output writes. For T=32, what is the arithmetic intensity?", "answer": "12.8", "explanation": "Reads per output: M contributes n/(4T), N contributes n/T -> (5n)/(4T) elements -> (5n)/T bytes. FLOPs per output = 2n. Intensity = (2n)/((5n)/T) = 2T/5 = 12.8 for T=32.", "topic_tags": ["roofline", "arithmetic_intensity", "tiling", "coarsening", "GEMM"]} +{"chapter": 7, "exercise": "2", "type": "mcq", "question": "Perform 1D discrete convolution with zero-padding (output length equals input length): N = {4, 1, 3, 2, 3}, F = {2, 1, 4}. Use P[i] = sum_{k=0..2} F[k] * N[i - 1 + k], treating out-of-bounds N as 0. What is P?", "choices": ["A. [8, 21, 13, 20, 7]", "B. [4, 12, 17, 14, 9]", "C. [2, 9, 12, 15, 11]", "D. 
[0, 8, 21, 13, 20]"], "answer": "A", "explanation": "Zero-padding with radius r=1 yields P = [8, 21, 13, 20, 7].", "topic_tags": ["convolution", "1D", "discrete", "zero_padding"]} +{"chapter": 7, "exercise": "3a", "type": "mcq", "question": "In 1D discrete convolution with zero-padding, what operation does the filter [0, 1, 0] primarily perform on a signal x?", "choices": ["A. Identity (pass-through): y[i] = x[i]", "B. Right shift by 1: y[i] = x[i-1]", "C. Left shift by 1: y[i] = x[i+1]", "D. 3-point moving average"], "answer": "A", "explanation": "[0,1,0] preserves the center sample, acting as an identity under the stated assumptions.", "topic_tags": ["convolution", "filters", "signal_processing"]} +{"chapter": 7, "exercise": "3b", "type": "mcq", "question": "In 1D discrete convolution with zero-padding, what operation does the filter [0, 0, 1] primarily perform?", "choices": ["A. Identity (pass-through)", "B. Right shift by 1 sample", "C. Left shift by 1 sample", "D. 3-point moving average"], "answer": "B", "explanation": "With the conventions used here, [0,0,1] produces a right shift by 1 (y[i] ~ x[i-1]).", "topic_tags": ["convolution", "filters", "signal_processing"]} +{"chapter": 7, "exercise": "3c", "type": "mcq", "question": "In 1D discrete convolution with zero-padding, what operation does the filter [1, 0, 0] primarily perform?", "choices": ["A. Right shift by 1 sample", "B. Left shift by 1 sample", "C. Identity (pass-through)", "D. High-pass smoothing"], "answer": "B", "explanation": "With the conventions used here, [1,0,0] produces a left shift by 1 (y[i] ~ x[i+1]).", "topic_tags": ["convolution", "filters", "signal_processing"]} +{"chapter": 7, "exercise": "3d", "type": "mcq", "question": "In 1D discrete convolution, what is the primary effect of the filter [-1/2, 0, 1/2]?", "choices": ["A. Low-pass smoothing", "B. First-derivative (edge detection)", "C. Identity", "D. Right shift by 1"], "answer": "B", "explanation": "It approximates a first derivative, responding to rapid changes (edges).", "topic_tags": ["convolution", "edge_detection", "derivative"]} +{"chapter": 7, "exercise": "3e", "type": "mcq", "question": "In 1D discrete convolution, what is the primary effect of the filter [1/3, 1/3, 1/3]?", "choices": ["A. High-pass edge enhancer", "B. Left shift by 1", "C. 3-point moving average (smoothing)", "D. Identity"], "answer": "C", "explanation": "Equal weights average the local neighborhood, smoothing noise.", "topic_tags": ["convolution", "smoothing", "moving_average"]} +{"chapter": 7, "exercise": "4a", "type": "mcq", "question": "1D convolution on an array of length N with an odd-sized filter of length M = 2r+1 (r = (M-1)/2). How many ghost (zero-padded) cells are there in total?", "choices": ["A. r", "B. 2r", "C. M - 1", "D. N + M"], "answer": "C", "explanation": "Total ghost cells = r on the left + r on the right = 2r = M - 1.", "topic_tags": ["convolution", "ghost_cells", "padding"]} +{"chapter": 7, "exercise": "4b", "type": "mcq", "question": "1D convolution on array length N with odd-sized filter M (using zero-padding, counting multiplications even when reading zeros). How many total multiplications are performed?", "choices": ["A. N \u00d7 M", "B. N \u00d7 (M - 1)", "C. (N - M) \u00d7 M", "D. 
2N \u00d7 M"], "answer": "A", "explanation": "Each of N outputs multiplies M taps (zeros included) -> NxM.", "topic_tags": ["convolution", "complexity"]} +{"chapter": 7, "exercise": "5a", "type": "mcq", "question": "2D convolution on an NxN image with an odd-sized MxM filter (M = 2r+1). With zero-padding, how many ghost cells surround the image in total?", "choices": ["A. 4Nr", "B. 2N(M-1)", "C. 4r(N + r)", "D. N^2 - (N - 2r)^2"], "answer": "C", "explanation": "Padding adds r rows/cols around; total ghost cells = 4r(N + r).", "topic_tags": ["convolution", "2D", "ghost_cells"]} +{"chapter": 7, "exercise": "5b", "type": "mcq", "question": "2D convolution on an NxN image with an MxM filter (zero-padding, counting multiplications even on zeros). How many total multiplications are performed?", "choices": ["A. N^2 \u00d7 M^2", "B. N \u00d7 M", "C. (N - M + 1)^2 \u00d7 M^2", "D. 2N \u00d7 M"], "answer": "A", "explanation": "Each of N^2 outputs multiplies M^2 taps -> N^2xM^2.", "topic_tags": ["convolution", "2D", "complexity"]} +{"chapter": 7, "exercise": "6a", "type": "mcq", "question": "2D convolution on an N1xN2 image with an odd-sized M1xM2 filter. Let r1=(M1-1)/2 and r2=(M2-1)/2. With zero-padding, how many ghost cells are there in total?", "choices": ["A. 2(N1 r2 + N2 r1)", "B. 2(N1 r2 + N2 r1) + 4 r1 r2", "C. (N1 + N2)(r1 + r2)", "D. N1 N2 (r1 + r2)"], "answer": "B", "explanation": "Edges contribute 2(N1 r2 + N2 r1); corners add 4 r1 r2.", "topic_tags": ["convolution", "2D", "ghost_cells", "rectangular"]} +{"chapter": 7, "exercise": "6b", "type": "mcq", "question": "2D convolution on an N1xN2 image with an M1xM2 filter (zero-padding, counting multiplications even on zeros). How many total multiplications are performed?", "choices": ["A. N1 N2 M1 M2", "B. N1 N2 (M1 + M2)", "C. (N1 - M1 + 1)(N2 - M2 + 1) M1 M2", "D. 2 N1 N2 M1"], "answer": "A", "explanation": "Each of N1xN2 outputs multiplies M1xM2 taps -> N1 N2 M1 M2.", "topic_tags": ["convolution", "2D", "complexity", "rectangular"]} +{"chapter": 7, "exercise": "7a", "type": "mcq", "question": "A 2D tiled convolution uses an output tile of size TxT and an odd-sized filter with radius r=(M-1)/2. Input tiles are (T+2r)x(T+2r) due to halo. For an NxN output, how many thread blocks are needed?", "choices": ["A. ceil(N/T) \u00d7 ceil(N/T)", "B. (N/T) \u00d7 (N/T) (truncate)", "C. N \u00d7 N", "D. ceil(N/(T+2r)) \u00d7 ceil(N/(T+2r))"], "answer": "A", "explanation": "Each block produces a TxT output tile; tiling the output requires ceil(N/T) in each dimension.", "topic_tags": ["tiled_convolution", "grid_sizing"]} +{"chapter": 7, "exercise": "7b", "type": "mcq", "question": "In a tiled 2D convolution setup where the output tile is TxT and the halo radius is r, the block loads a (T+2r)x(T+2r) input tile into shared memory. Assuming one thread per input-tile element, how many threads are needed per block?", "choices": ["A. T^2", "B. (T+r)^2", "C. (T+2r)^2", "D. 2T(T+r)"], "answer": "C", "explanation": "One thread per input-tile element -> (T+2r)^2 threads per block.", "topic_tags": ["tiled_convolution", "block_size", "threads_per_block"]} +{"chapter": 7, "exercise": "7c", "type": "mcq", "question": "In a tiled 2D convolution setup where the output tile is TxT and the halo radius is r, the block loads a (T+2r)x(T+2r) input tile into shared memory and allocates a shared-memory array to hold this input tile. How much shared memory is needed per block (in bytes) for single-precision floats?", "choices": ["A. T^2 \u00d7 4", "B. (T+2r)^2 \u00d7 4", "C. 
(T+2r) \u00d7 4", "D. 0"], "answer": "B", "explanation": "The shared tile size is (T+2r)x(T+2r) floats; at 4 bytes/float -> (T+2r)^2 x 4 bytes.", "topic_tags": ["tiled_convolution", "shared_memory"]} +{"chapter": 7, "exercise": "7d1", "type": "mcq", "question": "Consider a 2D convolution implementation that does NOT allocate any shared-memory input tile. Each thread block contains TxT threads, and each thread computes exactly one output element of a TxT output tile. All input reads are served directly from global memory (relying only on hardware caches). For an NxN output, how many thread blocks are required?", "choices": ["A. ceil(N/T) \u00d7 ceil(N/T)", "B. (N/T) \u00d7 (N/T) (truncate)", "C. N \u00d7 N", "D. ceil(N/(T+2r)) \u00d7 ceil(N/(T+2r))"], "answer": "A", "explanation": "Each block covers a TxT region of the output, so the grid needs ceil(N/T) blocks along each dimension.", "topic_tags": ["tiled_convolution", "grid_sizing", "cache_based"]} +{"chapter": 7, "exercise": "7d2", "type": "mcq", "question": "Consider a 2D convolution implementation that does NOT allocate any shared-memory input tile. Each thread block contains TxT threads, and each thread computes exactly one output element of a TxT output tile. All input reads are served directly from global memory (relying only on hardware caches). How many threads are launched per block?", "choices": ["A. T^2", "B. (T+2r)^2", "C. 2T(T+r)", "D. N^2"], "answer": "A", "explanation": "One thread per output element over a TxT tile yields TxT = T^2 threads per block.", "topic_tags": ["tiled_convolution", "threads_per_block", "cache_based"]} +{"chapter": 7, "exercise": "7d3", "type": "mcq", "question": "Consider a 2D convolution implementation that does NOT allocate any shared-memory input tile. Each thread block contains TxT threads, and each thread computes exactly one output element of a TxT output tile. All input reads are served directly from global memory (relying only on hardware caches). How much shared memory is needed per block (in bytes) to hold the input tile when using single-precision floats?", "choices": ["A. (T+2r)^2 \u00d7 4", "B. T^2 \u00d7 4", "C. 0", "D. 2(T+2r)^2 \u00d7 4"], "answer": "C", "explanation": "By definition, this variant allocates no shared-memory input tile; it relies solely on hardware caches.", "topic_tags": ["tiled_convolution", "shared_memory", "cache_based"]} +{"chapter": 8, "exercise": "1a", "type": "short_answer", "question": "A 3D seven-point stencil is applied on a cubic grid of size 120x120x120 (including boundary cells). The kernel only writes interior points (i=1..118, j=1..118, k=1..118). How many output grid points are computed per sweep?", "answer": "1643032", "explanation": "Interior count = (120-2)^3 = 118^3 = 1,643,032.", "topic_tags": ["stencil", "3D", "indexing", "counts"]} +{"chapter": 8, "exercise": "1b", "type": "short_answer", "question": "A basic 3D stencil kernel launches one thread per grid point over a 120x120x120 domain using blocks of size 8x8x8 threads (no overhang trimming). Using ceil division per dimension, how many thread blocks are launched in total?", "answer": "3375", "explanation": "Blocks per dim = ceil(120/8) = 15 -> total blocks = 15^3 = 3,375.", "topic_tags": ["CUDA", "launch_config", "3D", "ceil_div"]} +{"chapter": 8, "exercise": "1c", "type": "short_answer", "question": "A shared-memory tiled 3D stencil uses IN_TILE_DIM = 8 and a radius r = 1, so OUT_TILE_DIM = IN_TILE_DIM - 2r = 6. 
Over a 120x120x120 domain, blocks are placed per OUT_TILE_DIM using ceil division per dimension. How many thread blocks are launched in total?", "answer": "8000", "explanation": "Blocks per dim = ceil(120/6) = 20 -> total blocks = 20^3 = 8,000.", "topic_tags": ["CUDA", "tiling", "launch_config", "3D"]} +{"chapter": 8, "exercise": "1d", "type": "short_answer", "question": "A coarsened/tiled 3D stencil uses 2D thread blocks of IN_TILE_DIMxIN_TILE_DIM = 32x32 (z handled by coarsening) with radius r = 1, so OUT_TILE_DIM = 30. Over a 120x120x120 domain, blocks are placed on a 3D grid using ceil division by OUT_TILE_DIM in each dimension. How many thread blocks are launched in total?", "answer": "64", "explanation": "Blocks per dim = ceil(120/30) = 4 -> total blocks = 4^3 = 64.", "topic_tags": ["CUDA", "thread_coarsening", "tiling", "launch_config"]} +{"chapter": 8, "exercise": "2a", "type": "short_answer", "question": "A seven-point 3D stencil uses thread blocks of size IN_TILE_DIMxIN_TILE_DIM = 32x32 (z handled by coarsening). The block processes Z_COARSENING = 16 consecutive output z-planes. With radius r = 1, the block must load halo planes before the first and after the last output plane. How many input elements does a single block load over its lifetime? Assume each loaded plane is 32x32 elements.", "answer": "18432", "explanation": "Planes loaded = (16 output) + 2 halo = 18 planes; per plane 32x32=1024 -> 18x1024 = 18,432 elements.", "topic_tags": ["stencil", "thread_coarsening", "data_movement"]} +{"chapter": 8, "exercise": "2b", "type": "short_answer", "question": "Using IN_TILE_DIM=32 and radius r=1 (so OUT_TILE_DIM = 30) with Z_COARSENING = 16, how many output elements does a single block compute over its lifetime? Each output z-plane contributes OUT_TILE_DIM x OUT_TILE_DIM elements.", "answer": "14400", "explanation": "Per plane: 30x30 = 900 outputs; over 16 planes: 900x16 = 14,400 outputs.", "topic_tags": ["stencil", "throughput", "counts"]} +{"chapter": 8, "exercise": "2c", "type": "short_answer", "question": "For a 3D seven-point stencil with IN_TILE_DIM = 32 and radius r = 1 (so OUT_TILE_DIM = 30), and Z_COARSENING = 16, a block loads 18,432 input elements and computes 14,400 output elements. Assume 32-bit floats (4 bytes) and that each output performs 13 FLOPs (7 multiplies + 6 adds). What is the OP/B ratio for reads only? Do not include units; provide a decimal number.", "answer": "2.5390625", "explanation": "FLOPs = 14,400x13 = 187,200. Bytes read = 18,432x4 = 73,728. OP/B = 187,200 / 73,728 ~ 2.5390625.", "topic_tags": ["roofline", "arithmetic_intensity", "stencil"]} +{"chapter": 8, "exercise": "2d", "type": "short_answer", "question": "With IN_TILE_DIM=32 and radius r=1, a coarsened 3D stencil stores three 32x32 tiles in shared memory at once (previous, current, next z-plane). Using 32-bit floats (4 bytes), how much shared memory (bytes) does a block need if register tiling is NOT used?", "answer": "12288", "explanation": "3 tiles x (32x32 elements) x 4 B = 3x1024x4 = 12,288 bytes.", "topic_tags": ["shared_memory", "resource_usage"]} +{"chapter": 8, "exercise": "2e", "type": "short_answer", "question": "With IN_TILE_DIM=32 and radius r=1, if register tiling is used so that only one 32x32 tile is kept in shared memory at a time, how much shared memory (bytes) does a block need? 
Assume 32-bit floats (4 bytes).", "answer": "4096", "explanation": "1 tile x (32x32 elements) x 4 B = 1024x4 = 4,096 bytes.", "topic_tags": ["shared_memory", "register_tiling", "resource_usage"]} +{"chapter": 9, "exercise": "1", "type": "short_answer", "question": "Assume each atomic operation to a single global memory variable (so updates serialize) has a total latency of 100 ns. What is the maximum throughput for these atomic operations in operations per second (ops/s)?", "answer": "10000000", "explanation": "Throughput = 1 / (100 ns) = 1 / (100x10^-9 s) = 10,000,000 ops/s.", "topic_tags": ["CUDA", "atomics", "latency", "throughput"]} +{"chapter": 9, "exercise": "2", "type": "short_answer", "question": "A GPU supports atomic operations in L2 cache. Each atomic takes 4 ns if it hits in L2 and 100 ns if it goes to DRAM. If 90% of atomic operations hit in L2 and all updates target the same global variable (thus serialize), what is the approximate atomic throughput in ops/s?", "answer": "73529412", "explanation": "Average latency = 0.9x4 ns + 0.1x100 ns = 13.6 ns. Throughput ~ 1 / 13.6 ns ~ 73,529,412 ops/s (rounded).", "topic_tags": ["CUDA", "atomics", "L2-cache", "throughput"]} +{"chapter": 9, "exercise": "3", "type": "short_answer", "question": "A kernel performs 5 floating-point operations per atomic operation. Each atomic operation to a single global variable has latency 100 ns (so atomics serialize). What is the maximum floating-point throughput (in FLOP/s) limited by the atomic throughput?", "answer": "50000000", "explanation": "Atomic throughput = 1/(100 ns) = 10,000,000 ops/s. FLOP/s = 10,000,000 x 5 = 50,000,000.", "topic_tags": ["CUDA", "atomics", "FLOPS", "throughput"]} +{"chapter": 9, "exercise": "4", "type": "short_answer", "question": "A kernel replaces global atomics with shared-memory atomics (1 ns per atomic) and incurs an additional 10% total overhead to accumulate privatized results back to global memory. Each atomic corresponds to 5 floating-point operations. Assuming all updates still effectively serialize, what is the maximum floating-point throughput (in FLOP/s)?", "answer": "4545454545", "explanation": "Effective per-atomic time = 1 ns x 1.1 = 1.1 ns \u21d2 atomic throughput ~ 1/1.1 ns ~ 909,090,909 ops/s. FLOP/s ~ 909,090,909 x 5 ~ 4,545,454,545.", "topic_tags": ["CUDA", "atomics", "shared_memory", "throughput", "FLOPS"]} +{"chapter": 9, "exercise": "5", "type": "mcq", "question": "To atomically add the value of an integer variable Partial to a global-memory integer variable Total, which statement is correct? Use the signature: int atomicAdd(int* address, int val);", "choices": ["A. atomicAdd(Total, 1);", "B. atomicAdd(&Total, &Partial);", "C. atomicAdd(Total, &Partial);", "D. atomicAdd(&Total, Partial);"], "answer": "D", "explanation": "atomicAdd expects a pointer to the destination and a by-value increment: atomicAdd(&Total, Partial).", "topic_tags": ["CUDA", "atomics", "API"]} +{"chapter": 10, "exercise": "1", "type": "short_answer", "question": "Consider the CUDA kernel below and assume a single block with blockDim.x = 512 threads, warpSize = 32, and an input array of 1024 elements (each thread initially handles two elements via i = 2*threadIdx.x). 
During the 5th iteration of the loop (i.e., stride = 16), how many warps in the block have control-flow divergence?\n\n```cpp\n__global__ void simple_sum_reduction_kernel(float* input, float* output){\n unsigned int i = 2 * threadIdx.x;\n for (unsigned int stride = 1; stride <= blockDim.x; stride *= 2){\n if (threadIdx.x % stride == 0){\n input[i] += input[i + stride];\n }\n __syncthreads();\n }\n if (threadIdx.x == 0)\n *output = input[0];\n}\n```\n", "answer": "16", "explanation": "There are 512 threads -> 16 warps. At stride=16, in each warp exactly 2 of 32 lanes execute (threads whose threadIdx.x is a multiple of 16), so every warp diverges -> 16 warps.", "topic_tags": ["CUDA", "reduction", "warp_divergence", "control_flow"]} +{"chapter": 10, "exercise": "2", "type": "short_answer", "question": "Consider the CUDA kernel below and assume a single block with blockDim.x = 512 threads, warpSize = 32, and an input array of 1024 elements. During the 5th iteration (strides: 512, 256, 128, 64, **32**), how many warps have control-flow divergence?\n\n```cpp\n__global__ void ConvergentSumReductionKernel(float* input, float* output) {\n unsigned int i = threadIdx.x;\n for (unsigned int stride = blockDim.x; stride >= 1; stride /= 2) {\n if (threadIdx.x < stride) {\n input[i] += input[i + stride];\n }\n __syncthreads();\n }\n if(threadIdx.x == 0) {\n *output = input[0];\n }\n}\n```\n", "answer": "0", "explanation": "At stride=32, exactly lanes 0-31 (one full warp) are active and all take the same branch; other warps are fully inactive. No warp mixes taken/not-taken paths -> 0 divergent warps.", "topic_tags": ["CUDA", "reduction", "warp_divergence", "branching"]} +{"chapter": 10, "exercise": "3", "type": "mcq", "question": "You want a *reversed* convergent access pattern where active threads add from lower indices (i - stride) instead of higher (i + stride). Starting from the kernel shown below, pick the option that correctly applies this change **without altering the iteration order**. Assume the input has 1024 elements and blockDim.x=512.\n\nOriginal kernel:\n```cpp\n__global__ void ConvergentSumReductionKernel(float* input, float* output) {\n unsigned int i = threadIdx.x;\n for (unsigned int stride = blockDim.x; stride >= 1; stride /= 2) {\n if (threadIdx.x < stride) {\n input[i] += input[i + stride];\n }\n __syncthreads();\n }\n if (threadIdx.x == 0) {\n *output = input[0];\n }\n}\n```\nChoices (showing only the lines to change):\nA. `unsigned int i = threadIdx.x + blockDim.x;` and inside the loop `if (blockDim.x - threadIdx.x <= stride) { input[i] += input[i - stride]; }` and at the end `if (threadIdx.x == blockDim.x - 1) *output = input[i];`\nB. Keep `unsigned int i = threadIdx.x;` but change the loop body to `if (threadIdx.x < stride) { input[i] += input[i - stride]; }`\nC. `unsigned int i = threadIdx.x + blockDim.x;` and keep the original condition `if (threadIdx.x < stride) { input[i] += input[i - stride]; }`\nD. Keep the original kernel unchanged (already reversed by iteration order).\n", "choices": ["A", "B", "C", "D"], "answer": "A", "explanation": "To read i-stride safely, initialize i one blockDim.x ahead and mirror the active-lane condition. 
A ensures in-bounds access and preserves convergence; B/C can access negative indices or mis-activate lanes; D does not implement reversal.", "topic_tags": ["CUDA", "reduction", "indexing", "memory_access"]} +{"chapter": 10, "exercise": "4", "type": "mcq", "question": "You need to modify a block-level sum reduction to compute the **maximum** instead. Assume the GPU supports `atomicMax(float*, float)` and the usual `fmax` intrinsic. Which change set is **correct**?\n\nA. Replace all `+`/`+=` with `fmax(..., ...)` in both the per-thread coarsening stage and the shared-memory tree, and replace the final `atomicAdd(output, value)` with `atomicMax(output, value)`.\nB. Replace `+` with `fmax` only in the shared-memory tree; keep `atomicAdd` at the end.\nC. Keep the tree as sum; at the end do `*output = fmax(*output, blockMax)` without atomics.\nD. Compute max on the host after copying partial sums from device.\n", "choices": ["A", "B", "C", "D"], "answer": "A", "explanation": "Max must be used consistently in *all* accumulation phases, and the final cross-block combine must be an atomic max. B mixes ops; C is racy; D changes the algorithmic contract.", "topic_tags": ["CUDA", "reduction", "atomics", "parallel_algorithms"]} +{"chapter": 10, "exercise": "5", "type": "mcq", "question": "Consider a coarsened block-level reduction where each thread sums multiple elements spaced by `BLOCK_DIM`. The input length `N` may not be a multiple of the tile span. Which guard pattern prevents out-of-bounds reads while preserving correctness?\n\nA.\n```cpp\nfloat sum = 0.0f;\nif (i < N) {\n sum = input[i];\n for (unsigned int t = 1; t < COARSE_FACTOR*2; ++t)\n if (i + t*BLOCK_DIM < N) sum += input[i + t*BLOCK_DIM];\n}\n```\nB.\n```cpp\nfloat sum = input[i];\nfor (unsigned int t = 1; t < COARSE_FACTOR*2; ++t)\n if (i + t*BLOCK_DIM < N) sum += input[i + t*BLOCK_DIM];\n```\nC.\n```cpp\nfloat sum = 0.0f;\nif (i <= N) {\n sum = input[i];\n for (unsigned int t = 1; t <= COARSE_FACTOR*2; ++t)\n sum += input[i + t*BLOCK_DIM];\n}\n```\nD.\n```cpp\nfloat sum = 0.0f;\nif (i < N) sum = input[i];\nfor (unsigned int t = 1; t < COARSE_FACTOR*2; ++t)\n sum += input[i + t*BLOCK_DIM];\n```\n", "choices": ["A", "B", "C", "D"], "answer": "A", "explanation": "A guards both the base element and each strided read. B can read input[i] out of bounds when i \u2265 N. C uses off-by-one bounds and unguarded inner reads. D leaves inner reads unguarded.", "topic_tags": ["CUDA", "bounds_checking", "coarsening", "memory_safety"]} +{"chapter": 11, "exercise": "1", "type": "mcq", "question": "A single CUDA block of 8 threads performs an inclusive prefix scan on X = [4, 6, 7, 1, 2, 8, 5, 2] using the classic Kogge-Stone pattern. Strides run with values s = 1, 2, 4. For each stride, every thread tid >= s adds the value from position tid - s that was produced in the previous stride, while threads with tid < s keep their current value. After completing all three strides, which vector remains in X?", "choices": ["A. [4, 10, 17, 18, 20, 28, 33, 35]", "B. [4, 10, 13, 8, 10, 10, 13, 7]", "C. [4, 6, 7, 1, 2, 8, 5, 2]", "D. [4, 6, 13, 14, 16, 26, 31, 33]"], "answer": "A", "explanation": "Stride 1 adds neighbors offset by 1, stride 2 adds neighbors offset by 2 drawing from the stride-1 results, and stride 4 adds neighbors offset by 4. 
The cumulative effect is the inclusive prefix sums [4,10,17,18,20,28,33,35].", "topic_tags": ["CUDA", "scan", "kogge-stone", "prefix_sum"]} +{"chapter": 11, "exercise": "2", "type": "short_answer", "question": "For an inclusive Kogge-Stone scan across N = 2048 elements, the theoretical number of floating-point additions is given by N * log2(N) - (N - 1). Compute this value and provide it as an integer.", "answer": "20481", "explanation": "log2(2048) = 11, so 2048 * 11 = 22528 and subtracting (2048 - 1) = 2047 gives 22528 - 2047 = 20481 additions.", "topic_tags": ["CUDA", "scan", "operation_count"]} +{"chapter": 11, "exercise": "3", "type": "mcq", "question": "In the Kogge-Stone scan kernel shown below, assume blockDim.x = 1024 and warp size = 32.\\n\\n```cpp\\n__global__ void koggeStone(float *X, float *Y, unsigned int N) {\\n __shared__ float XY[1024];\\n unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;\\n if (i < N) {\\n XY[threadIdx.x] = X[i];\\n } else {\\n XY[threadIdx.x] = 0.0f;\\n }\\n for (unsigned int stride = 1; stride < blockDim.x; stride *= 2) {\\n __syncthreads();\\n float temp;\\n if (threadIdx.x >= stride) {\\n temp = XY[threadIdx.x] + XY[threadIdx.x - stride];\\n }\\n __syncthreads();\\n if (threadIdx.x >= stride) {\\n XY[threadIdx.x] = temp;\\n }\\n }\\n if (i < N) {\\n Y[i] = XY[threadIdx.x];\\n }\\n}\\n```\\n\\nFor how many stride values does control divergence occur within warp 0?", "choices": ["A. 3", "B. 5", "C. 16", "D. 32"], "answer": "B", "explanation": "Warp 0 contains threads 0-31. Divergence happens while stride is 1, 2, 4, 8, or 16 because only a subset of that warp satisfies threadIdx.x >= stride. Once stride reaches 32 or larger, either all warp-0 threads are inactive or all evaluate the branch the same way.", "topic_tags": ["CUDA", "scan", "control_divergence", "warps"]} +{"chapter": 11, "exercise": "4", "type": "short_answer", "question": "A Kogge-Stone scan processes N = 2048 elements using two blocks of 1024 threads. Within each block, the numbers of active threads at strides s = 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 are 1023, 1022, 1020, 1016, 1008, 992, 960, 896, 768, and 512 respectively. Each active thread performs one addition per stride. After summing both blocks and adding 1024 more additions to propagate block sums, how many additions are executed in total? Provide the integer result.", "answer": "19458", "explanation": "Per block the additions sum to 1023+1022+1020+1016+1008+992+960+896+768+512 = 9217. Two blocks contribute 2 * 9217 = 18434 additions, and adding 1024 for inter-block propagation yields 19458.", "topic_tags": ["CUDA", "scan", "operation_count", "blocks"]} +{"chapter": 11, "exercise": "5", "type": "short_answer", "question": "A Brent-Kung scan on N = 2048 elements uses one block of 1024 threads. During the reduction tree phase, the active thread counts per stride are 1024, 512, 256, 128, 64, 32, 16, 8, 4, 2, 1. During the reverse tree phase, the active counts per stride are 1, 4, 7, 15, 31, 63, 127, 254, 509, 1019. If each active thread executes one addition at its stride, how many additions occur in total across both phases? Provide the integer total.", "answer": "4077", "explanation": "The reduction phase contributes 1024+512+256+128+64+32+16+8+4+2+1 = 2047 additions. The reverse phase contributes 1+4+7+15+31+63+127+254+509+1019 = 2030 additions. 
The combined total is 2047 + 2030 = 4077.", "topic_tags": ["CUDA", "scan", "brent-kung", "operation_count"]} +{"chapter": 12, "exercise": "1", "type": "mcq", "question": "The function below implements co_rank for merging two sorted arrays using zero-based indices.\n\n```cpp\nint co_rank(int k, const int* A, int m, const int* B, int n) {\n int i = (k < m) ? k : m;\n int j = k - i;\n int i_low = (0 > (k - n)) ? 0 : (k - n);\n int j_low = (0 > (k - m)) ? 0 : (k - m);\n while (true) {\n if (i > 0 && j < n && A[i - 1] > B[j]) {\n int delta = (i - i_low + 1) >> 1;\n j_low = j;\n j += delta;\n i -= delta;\n } else if (j > 0 && i < m && B[j - 1] >= A[i]) {\n int delta = (j - j_low + 1) >> 1;\n i_low = i;\n i += delta;\n j -= delta;\n } else {\n return i;\n }\n }\n}\n```\n\nLet A = [1, 7, 8, 9, 10], B = [7, 10, 10, 12], and let C be the merged array of length m + n. For k = 8 (the start of the output suffix C[8:]), which pair (i, j) = (co_rank(k), k - co_rank(k)) does the function compute?", "choices": ["A. (4, 4)", "B. (5, 3)", "C. (8, 0)", "D. (3, 5)"], "answer": "B", "explanation": "The merged array is [1,7,7,8,9,10,10,10,12]. Position k = 8 is past all five elements taken from A, so co_rank returns i = 5. The remaining offset is j = k - i = 3.", "topic_tags": ["merge", "co_rank", "CUDA", "parallel_algorithms"]} +{"chapter": 12, "exercise": "4a", "type": "short_answer", "question": "Two sorted arrays of lengths 1030400 and 608000 are merged on the GPU using a basic co-rank kernel. Each thread is assigned exactly 8 output elements (elementsPerThread = 8), and every thread performs two binary searches via co_rank before calling a sequential merge. How many threads in total execute the binary searches when the entire merge is launched? Provide the integer count.", "answer": "204800", "explanation": "The merged output has 1030400 + 608000 = 1638400 elements. Dividing by 8 elements per thread gives 1638400 / 8 = 204800 threads, and every thread runs both co_rank searches.", "topic_tags": ["merge", "co_rank", "work_partitioning"]} +{"chapter": 12, "exercise": "4b", "type": "short_answer", "question": "The tiled merge kernel splits work by assigning exactly 8 output elements to each thread within a 1024-thread block. Before processing its tile, thread 0 in every block computes co_rank for the block's starting and ending output indices. Using the same arrays (1030400 and 608000 elements) and 8 elements merged per thread, how many threads in total perform binary searches against global memory? Provide the integer count.", "answer": "200", "explanation": "Each 1024-thread block merges 8 x 1024 = 8192 elements. The total output length 1638400 divided by 8192 gives 200 blocks. Only thread 0 in each block issues the two co_rank calls, so 200 threads perform binary searches.", "topic_tags": ["merge", "tiled_merge", "co_rank", "CUDA"]} +{"chapter": 13, "exercise": "1", "type": "mcq", "question": "A benchmark with N = 10,000,000 elements produced the timings below (in milliseconds) and GPU speedups versus a CPU quicksort baseline:\n\n- Naive Parallel Radix Sort: time = 14.099, speedup = 149.70x\n- Memory-coalesced GPU sort: time = 18.055, speedup = 116.90x\n- Memory-coalesced GPU sort (multiradix): time = 20.671, speedup = 102.11x\n- Memory-coalesced GPU sort (multiradix + thread coarsening): time = 39.275, speedup = 53.74x\n- GPU merge sort: time = 135.828, speedup = 15.54x\n\nWhich method delivered the highest speedup?", "choices": ["A. Memory-coalesced GPU sort", "B. Memory-coalesced GPU sort (multiradix)", "C. 
Memory-coalesced GPU sort (multiradix + thread coarsening)", "D. Naive Parallel Radix Sort"], "answer": "D", "explanation": "149.70x is the largest reported speedup, achieved by the Naive Parallel Radix Sort entry.", "topic_tags": ["sorting", "radix_sort", "performance", "CUDA"]} +{"chapter": 13, "exercise": "2", "type": "short_answer", "question": "Using the benchmark data above, compute the difference in reported speedup between Naive Parallel Radix Sort (149.70x) and GPU merge sort (15.54x). Give the result rounded to two decimal places as a decimal number (e.g., 12.34).", "answer": "134.16", "explanation": "149.70 - 15.54 = 134.16 when rounded to two decimal places.", "topic_tags": ["sorting", "performance", "speedup"]} +{"chapter": 13, "exercise": "3", "type": "mcq", "question": "A single-kernel variant of the parallel radix sort attempts to synchronize across the entire grid to reuse the total count of zero bits. Beyond roughly 100,000 elements it deadlocks. Why is this approach fragile?", "choices": ["A. Because GPU quicksort launches interfere with the kernel", "B. Because the first block must wait for the last block without a device-wide synchronization primitive", "C. Because the radix counters overflow 32-bit integers", "D. Because thread coarsening increases register pressure beyond limits"], "answer": "B", "explanation": "The README notes that the single-kernel try stalls because the first block needs the global zero count from the last block, effectively requiring a grid-wide barrier that the vanilla CUDA launch cannot provide.", "topic_tags": ["radix_sort", "synchronization", "CUDA", "pitfalls"]} +{"chapter": 14, "exercise": "1", "type": "mcq", "question": "Consider the 4x4 sparse matrix\n\n[ [1, 0, 7, 0],\n [0, 0, 8, 0],\n [0, 4, 3, 0],\n [2, 0, 0, 1] ].\n\nUsing zero-based indices and listing nonzeros in row-major order, which triple of arrays encodes this matrix in COO format (row indices, column indices, values)?", "choices": ["A. row=[0,0,1,2,3,3,3], col=[0,2,2,1,0,2,3], val=[1,7,8,4,2,3,1]", "B. row=[0,0,1,2,2,3,3], col=[0,2,2,1,2,0,3], val=[1,7,8,4,3,2,1]", "C. row=[0,1,1,2,2,3,3], col=[0,2,3,0,2,0,3], val=[1,7,8,4,3,2,1]", "D. row=[0,0,2,2,3,3,3], col=[0,2,1,2,0,2,3], val=[1,7,4,3,2,1,8]"], "answer": "B", "explanation": "Row-major enumeration yields nonzeros (0,0),(0,2),(1,2),(2,1),(2,2),(3,0),(3,3) with values [1,7,8,4,3,2,1]; option B matches this ordering exactly.", "topic_tags": ["spmv", "sparse_formats", "COO", "CUDA"]} +{"chapter": 14, "exercise": "2", "type": "short_answer", "question": "The same 4x4 matrix is stored in CSR format using zero-based indexing and row-major ordering of nonzeros. What is the CSR row pointer array rowPtr? Provide it as a tuple (r0,r1,r2,r3,r4).", "answer": "(0,2,3,5,7)", "explanation": "Row lengths are [2,1,2,2]; the prefix sums give rowPtr = [0,2,3,5,7].", "topic_tags": ["spmv", "sparse_formats", "CSR"]} +{"chapter": 14, "exercise": "3", "type": "mcq", "question": "You know only that a sparse matrix has m rows, n columns, and z nonzeros. Which statement correctly identifies the additional information needed to determine memory usage for different formats?", "choices": ["A. COO storage cannot be determined from m, n, z because row indices depend on padding information.", "B. CSR needs the maximum row length to size its row pointer array.", "C. Both ELL and JDS require the maximum number of nonzeros in any row (and related row ordering details) to size their auxiliary arrays.", "D. 
JDS needs only m, n, z because the iterPtr array always has length m + 1."], "answer": "C", "explanation": "COO and CSR can be sized directly from z and m (rowPtr has length m+1). ELL and JDS must know the longest row to size padding/iterPtr, and JDS also needs the row permutation order.", "topic_tags": ["spmv", "sparse_formats", "ELL", "JDS"]} +{"chapter": 15, "exercise": "1", "type": "mcq", "question": "A scale-free graph with 20,000 vertices was benchmarked using several BFS variants. The reported runtimes (milliseconds) and GPU speedups versus a sequential baseline were:\n\n- Sequential BFS: 10.26 ms (reference)\n- Push Vertex-Centric BFS: 1.44 ms (7.12x)\n- Pull Vertex-Centric BFS: 0.44 ms (23.53x)\n- Edge-Centric BFS: 0.19 ms (54.81x)\n- Frontier-based BFS: 2.49 ms (4.11x)\n- Optimized Frontier-based BFS: 2.60 ms (3.95x)\n- Direction-Optimized BFS: 0.48 ms (21.54x)\n\nWhich variant achieved the highest reported speedup?", "choices": ["A. Pull Vertex-Centric BFS", "B. Edge-Centric BFS", "C. Direction-Optimized BFS", "D. Push Vertex-Centric BFS"], "answer": "B", "explanation": "Edge-Centric BFS reached 54.81x, larger than the pull (23.53x), direction-optimized (21.54x), or push (7.12x) variants.", "topic_tags": ["BFS", "performance", "graphs", "CUDA"]} +{"chapter": 15, "exercise": "2", "type": "short_answer", "question": "For the scale-free graph with 10,000 vertices, the benchmark reported Sequential BFS = 4.71 ms and Direction-Optimized BFS = 0.35 ms. What is the absolute time saved (Sequential minus Direction-Optimized) in milliseconds, rounded to two decimals?", "answer": "4.36", "explanation": "Time saved = 4.71 ms - 0.35 ms = 4.36 ms.", "topic_tags": ["BFS", "performance", "speedup"]} +{"chapter": 15, "exercise": "3", "type": "mcq", "question": "Consider a directed graph with vertices 0..7 and adjacency lists:\n0->{5,2}, 1->{4}, 2->{3}, 3->{6}, 4->{}, 5->{1,7}, 6->{}, 7->{4,6}.\nA level-synchronous vertex-centric push BFS launches one thread per vertex each iteration. Starting from source vertex 0, iteration 1 visits {0}; iteration 2 processes frontier {5,2}; iteration 3 processes frontier {1,7,3}. During iteration 3, how many threads actually iterate over their vertex's neighbor list?", "choices": ["A. 1", "B. 2", "C. 3", "D. 
8"], "answer": "C", "explanation": "Iteration 3's frontier contains three vertices (1, 7, 3); only their threads traverse neighbor lists, while the remaining five threads are idle.", "topic_tags": ["BFS", "vertex_centric", "parallel_threads"]} +{"chapter": 16, "exercise": "1", "type": "short_answer", "question": "The pooling forward routine below processes one feature map with stride K and averages when pooling_type is \"avg\":\n\n```cpp\nvoid poolingLayer_forward(int M, int H, int W, int K, float* Y, float* S, const char* pooling_type) {\n for (int m = 0; m < M; m++)\n for (int h = 0; h < H/K; h++)\n for (int w = 0; w < W/K; w++) {\n if (strcmp(pooling_type, \"max\") == 0)\n S[m*(H/K)*(W/K) + h*(W/K) + w] = -FLT_MAX;\n else\n S[m*(H/K)*(W/K) + h*(W/K) + w] = 0.0f;\n for (int p = 0; p < K; p++)\n for (int q = 0; q < K; q++) {\n float val = Y[m*H*W + (K*h + p)*W + (K*w + q)];\n if (strcmp(pooling_type, \"max\") == 0) {\n if (val > S[m*(H/K)*(W/K) + h*(W/K) + w])\n S[m*(H/K)*(W/K) + h*(W/K) + w] = val;\n } else {\n S[m*(H/K)*(W/K) + h*(W/K) + w] += val / (K*K);\n }\n }\n }\n}\n```\n\nTake M=1, H=4, W=4, K=2, pooling_type=\"avg\", and the input feature map\nY = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]].\nWhat value does the kernel store at output position (h=0, w=0)? Provide the result rounded to two decimals.", "answer": "3.50", "explanation": "The 2x2 window [[1,2],[5,6]] averages to 3.5, so the stored value is 3.50.", "topic_tags": ["pooling", "average", "cuda", "forward_pass"]} +{"chapter": 16, "exercise": "2", "type": "short_answer", "question": "For input size [8, 64, 128, 128], the custom max pooling benchmark reports 82.433 ms while PyTorch reports 9.454 ms. Compute the speedup of PyTorch relative to the custom kernel (custom_time / pytorch_time) and round to two decimals.", "answer": "8.72", "explanation": "Speedup = 82.433 / 9.454 = 8.719..., which rounds to 8.72.", "topic_tags": ["pooling", "performance", "benchmark"]} +{"chapter": 16, "exercise": "3", "type": "mcq", "question": "The CUDA kernel below unrolls an input tensor X into X_unroll for convolution.\n\n```cpp\n__global__ void unroll_Kernel(int C, int H, int W, int K, float* X, float* X_unroll) {\n int t = blockIdx.x * blockDim.x + threadIdx.x;\n int H_out = H - K + 1;\n int W_out = W - K + 1;\n int W_unroll = H_out * W_out;\n if (t < C * W_unroll) {\n int c = t / W_unroll;\n int w_unroll = t % W_unroll;\n int h_out = w_unroll / W_out;\n int w_out = w_unroll % W_out;\n int w_base = c * K * K;\n for (int p = 0; p < K; p++)\n for (int q = 0; q < K; q++) {\n int h_unroll = w_base + p*K + q;\n X_unroll[h_unroll * W_unroll + w_unroll] = X[c * H * W + (h_out + p) * W + (w_out + q)];\n }\n }\n}\n```\n\nConsider consecutive thread indices t and t+1 that map to the same channel (same c) and satisfy w_out < W_out - 1. For a fixed (p, q) iteration, what is the difference between their global memory read addresses in X?", "choices": ["A. 0", "B. 1", "C. W", "D. 
K*K"], "answer": "B", "explanation": "With c and h_out identical and w_out incremented by one, the linearized address in X increases by exactly 1 element, yielding coalesced reads.", "topic_tags": ["convolution", "im2col", "memory_coalescing", "cuda"]} +{"chapter": 17, "exercise": "1", "type": "short_answer", "question": "Consider the loop-fission version of the FHD computation (indices are zero-based):\n\n```cpp\nfor (int m = 0; m < M; m++) {\n rMu[m] = rPhi[m]*rD[m] + iPhi[m]*iD[m];\n iMu[m] = rPhi[m]*iD[m] - iPhi[m]*rD[m];\n}\nfor (int m = 0; m < M; m++) {\n for (int n = 0; n < N; n++) {\n rFhD[n] += rMu[m]*cArg(m,n) - iMu[m]*sArg(m,n);\n iFhD[n] += iMu[m]*cArg(m,n) + rMu[m]*sArg(m,n);\n }\n}\n```\n\nAssume M = 4 and N = 3. Before the very first update to rFhD occurs in the second loop, how many times have the statements computing rMu and iMu (lines 2-3) executed? Give the integer count.", "answer": "4", "explanation": "The first loop completes all M iterations before the second loop starts, so lines 2-3 execute once for each m = 0,1,2,3, totalling 4 executions.", "topic_tags": ["loop_fission", "execution_order", "MRI"]} +{"chapter": 17, "exercise": "2", "type": "mcq", "question": "The original loop nests iterate with n outer and m inner:\n\n```cpp\nfor (int n = 0; n < N; n++) {\n for (int m = 0; m < M; m++) {\n body(n, m);\n }\n}\n```\n\nAfter loop interchange, the outer loop runs over m first. For M = 2 and N = 3, what is the sequence of (n,m) pairs visited in the **interchanged** version?", "choices": ["A. (0,0), (1,0), (2,0), (0,1), (1,1), (2,1)", "B. (0,0), (0,1), (0,2), (1,0), (1,1), (1,2)", "C. (0,0), (0,1), (1,0), (1,1), (2,0), (2,1)", "D. (0,0), (0,1), (1,1), (1,2), (0,2), (1,0)"], "answer": "A", "explanation": "With m outermost, we hold m=0 and sweep n=0..2, then set m=1 and sweep n=0..2, giving the (n,m) pairs (0,0),(1,0),(2,0),(0,1),(1,1),(2,1).", "topic_tags": ["loop_interchange", "execution_order", "MRI"]} +{"chapter": 17, "exercise": "3", "type": "mcq", "question": "A CUDA kernel assigns each thread a unique n index:\n\n```cpp\nint n = blockIdx.x * FHD_THREADS_PER_BLOCK + threadIdx.x;\nfloat xn = x[n];\nfor (int m = 0; m < M; m++) {\n float term = kx[m] * xn;\n // ... uses ky[m], kz[m], rMu[m], iMu[m]\n}\n```\n\nWhich statement best explains why caching kx[n] in a register would be useless for this kernel?", "choices": ["A. kx[n] equals zero for all n, so caching offers no benefit.", "B. Threads need kx values indexed by m inside the loop, not by n, so kx[n] is never referenced.", "C. kx is stored in shared memory already, making registers redundant.", "D. Registers cannot hold floating-point values across loop iterations."], "answer": "B", "explanation": "The loop repeatedly accesses kx[m] for m = 0..M-1; the value kx[n] is never used, so loading it would waste instructions and registers.", "topic_tags": ["memory_access", "CUDA", "MRI"]} +{"chapter": 18, "exercise": "1", "type": "short_answer", "question": "The host routine below launches a gather kernel in chunks of constant memory. Assume `CHUNK_SIZE = 256`.\n\n```cpp\nvoid cenergyParallelGather(float* energygrid, dim3 grid_dim, float gridspacing, float z,\n const float* host_atoms, int numatoms) {\n int num_chunks = (numatoms + CHUNK_SIZE - 1) / CHUNK_SIZE;\n for (int chunk = 0; chunk < num_chunks; ++chunk) {\n int start_atom = chunk * CHUNK_SIZE;\n int atoms_in_chunk = (start_atom + CHUNK_SIZE <= numatoms) ? 
CHUNK_SIZE\n : (numatoms - start_atom);\n // cudaMemcpyToSymbol atoms_in_chunk elements and launch kernel\n }\n}\n```\n\nIf `numatoms = 1000`, how many times will the body of the loop (and thus the constant-memory copy) execute? Give the integer count.", "answer": "4", "explanation": "num_chunks = ceil(1000 / 256) = (1000 + 255) / 256 = 1255 / 256 = 4 when using integer division with the bias term.", "topic_tags": ["cuda", "gather", "chunking"]} +{"chapter": 18, "exercise": "2", "type": "mcq", "question": "For a coarsening factor of 8, the README compares one iteration of the original gather kernel (original gather kernel) to one iteration of the coarsened kernel (coarsened kernel). The counts per outer iteration are:\n- original gather kernel: 32 memory loads, 88 floating-point operations, 8 branch evaluations.\n- coarsened kernel: 11 memory loads, 61 floating-point operations, 17 branch evaluations.\n\nWhich statement best summarizes the trade-off introduced by thread coarsening?", "choices": ["A. Memory loads decrease, arithmetic decreases, but branch count rises.", "B. Memory loads increase, arithmetic increases, and branch count drops.", "C. Memory loads and arithmetic both decrease to zero, so only branches remain.", "D. All three categories (loads, arithmetic, branches) grow because more work is assigned to each thread."], "answer": "A", "explanation": "Coarsening reduces global loads (32->11) and arithmetic (88->61) but increases the number of branch evaluations (8->17).", "topic_tags": ["thread_coarsening", "performance", "cuda"]} +{"chapter": 18, "exercise": "3", "type": "mcq", "question": "Section 18.3 notes two drawbacks of increasing the work per CUDA thread (high thread coarsening). Which pair captures those concerns?", "choices": ["A. Higher register usage can cut occupancy, and excessive coarsening can leave many cores idle.", "B. Device memory becomes read-only, and shared memory can no longer be allocated.", "C. Warp size expands beyond 32, and thread blocks stop synchronizing correctly.", "D. Atomic operations become mandatory, and constant memory broadcasts no longer function."], "answer": "A", "explanation": "More work per thread generally consumes more registers (reducing occupancy) and risks underutilizing parallelism if too few threads remain active.", "topic_tags": ["thread_coarsening", "occupancy", "gpu_architecture"]} +{"chapter": 20, "exercise": "1a", "type": "short_answer", "question": "A 25-point stencil operates on a 64x64x2048 grid that is decomposed along z across 17 MPI ranks: 16 compute ranks plus 1 data server. Each compute rank receives 128 consecutive z-slices (ignoring halos). During Stage 2 of the described two-stage exchange (Stage 1: 4-slice boundaries; Stage 2: interior slices) the internal compute ranks process the interior z-slices after the four-slice boundaries are done in Stage 1. How many interior grid points does one internal compute rank update in Stage 2? Provide the integer count.", "answer": "376320", "explanation": "Stage 1 covers four front and four back slices, leaving 120 interior slices. Interior work = 56x56x120 = 376,320 points per internal rank.", "topic_tags": ["MPI", "stencil", "domain_decomposition"]} +{"chapter": 20, "exercise": "1b", "type": "mcq", "question": "Using the same 64x64x2048 grid split evenly over 16 compute ranks (plus 1 data server), how many halo grid points must an internal compute rank exchange in Stage 2 of the described two-stage exchange (Stage 1: 4-slice boundaries; Stage 2: interior slices)? 
Assume four halo slices on each side and single-precision floats.", "choices": ["A. 16,384 points (65,536 bytes)", "B. 32,768 points (131,072 bytes)", "C. 65,536 points (262,144 bytes)", "D. 524,288 points (2,097,152 bytes)"], "answer": "B", "explanation": "Each side requires 4x64x64 = 16,384 points; two sides double that to 32,768 points ~ 131 KB when using 4-byte floats.", "topic_tags": ["MPI", "halo_exchange", "stencil"]} +{"chapter": 20, "exercise": "2", "type": "mcq", "question": "The call `MPI_Send(ptr_a, 1000, MPI_FLOAT, 2000, 4, MPI_COMM_WORLD)` transmits 4,000 bytes. What is the size of each element?", "choices": ["A. 1 byte", "B. 2 bytes", "C. 4 bytes", "D. 8 bytes"], "answer": "C", "explanation": "4,000 byte total / 1,000 elements = 4 bytes per element, consistent with MPI_FLOAT.", "topic_tags": ["MPI", "message_passing", "data_types"]} +{"chapter": 20, "exercise": "3", "type": "mcq", "question": "Which MPI statement is correct?", "choices": ["A. MPI_Send is the only nonblocking send routine.", "B. MPI_Recv blocks until a matching message arrives.", "C. MPI messages must be at least 128 bytes.", "D. Separate MPI ranks share the same global memory space."], "answer": "B", "explanation": "MPI_Recv is a blocking receive; the other statements are false.", "topic_tags": ["MPI", "semantics", "blocking"]} +{"chapter": 21, "exercise": "1", "type": "mcq", "question": "The parent kernel below launches a child kernel for each Bezier line.\n\n```cpp\ntypedef float2 float2;\nstruct BezierLine {\n float2 CP[3];\n float2 *vertexPos;\n int nVertices;\n};\n__global__ void computeBezierLines_parent(BezierLine *lines, int nLines) {\n int lidx = threadIdx.x + blockDim.x * blockIdx.x;\n if (lidx < nLines) {\n float curvature = 1.0f; // assume curvature chosen elsewhere\n lines[lidx].nVertices = min(max((int)(curvature * 16.0f), 4), 1024);\n cudaMalloc((void**)&lines[lidx].vertexPos, lines[lidx].nVertices * sizeof(float2));\n int childBlocks = (lines[lidx].nVertices + 31) / 32;\n computeBezierLine_child<<<childBlocks, 32>>>(lidx, lines, lines[lidx].nVertices);\n }\n}\n```\nIf `nLines = 1024` and the parent is launched with `computeBezierLines_parent<<<16, 64>>>(...)`, how many child kernels are launched in total?", "choices": ["A. 16", "B. 32", "C. 256", "D. 1024"], "answer": "D", "explanation": "Each of the 1024 parent threads (16 blocks x 64 threads) passes the `lidx < nLines` check and launches exactly one child grid, so 1024 child kernels execute.", "topic_tags": ["cuda", "dynamic_parallelism", "bezier"]} +{"chapter": 21, "exercise": "2", "type": "short_answer", "question": "For a launch with (`nLines = 1024`, `<<<16,64>>>`), suppose each parent thread creates a non-blocking CUDA stream before launching `computeBezierLine_child` and destroys it afterwards. How many streams are created across the entire parent launch? Provide the integer count.", "answer": "1024", "explanation": "One stream is created per parent thread, and 16x64 = 1024 parent threads execute, so 1024 streams are created and later destroyed.", "topic_tags": ["cuda", "dynamic_parallelism", "streams"]} +{"chapter": 21, "exercise": "3", "type": "mcq", "question": "A quadtree recursively subdivides a square region until each leaf contains at most one of 64 equidistant points. Including the root node, what is the maximum depth of the quadtree?", "choices": ["A. 4", "B. 8", "C. 16", "D. 64"], "answer": "A", "explanation": "Depth 0 holds all 64 points. Each subdivision splits into 4 equal tiles. 
After three splits (depths 1-3) there are 64 leaves with one point each, so the maximum depth including the root is 4.", "topic_tags": ["quadtree", "dynamic_parallelism", "depth"]} +{"chapter": 21, "exercise": "4", "type": "mcq", "question": "In that quadtree, every node at depths 0, 1, and 2 launches a child kernel for each of its four quadrants. How many parent nodes issue a child-kernel launch in total?", "choices": ["A. 4", "B. 16", "C. 21", "D. 64"], "answer": "C", "explanation": "Launching parents include the root (1 node), all depth-1 nodes (4 nodes), and all depth-2 nodes (16 nodes); 1 + 4 + 16 = 21 parent launches.", "topic_tags": ["quadtree", "dynamic_parallelism", "kernel_launches"]}