Download the Jupyter Notebook for this section: `parallel.ipynb`

# Parallel Processing Example

```
[1]:
```

```
import pyblp
import pandas as pd
pyblp.options.digits = 2
pyblp.options.verbose = False
pyblp.__version__
```

```
[1]:
```

```
'0.10.1'
```

In this example, we’ll use parallel processing to compute elasticities market-by-market for a simple Logit problem configured with some of the fake cereal data from Nevo (2000).

```
[2]:
```

```
product_data = pd.read_csv(pyblp.data.NEVO_PRODUCTS_LOCATION)
formulation = pyblp.Formulation('0 + prices', absorb='C(product_ids)')
problem = pyblp.Problem(formulation, product_data)
results = problem.solve()
results
```

```
[2]:
```

```
Problem Results Summary:
==========================================
GMM Objective Clipped Weighting Matrix
Step Value Shares Condition Number
---- --------- ------- ----------------
2 +1.9E+02 0 +5.7E+07
==========================================
Cumulative Statistics:
========================
Computation Objective
Time Evaluations
----------- -----------
00:00:00 2
========================
Beta Estimates (Robust SEs in Parentheses):
==========
prices
----------
-3.0E+01
(+1.0E+00)
==========
```

```
[3]:
```

```
pyblp.options.verbose = True
with pyblp.parallel(2):
    elasticities = results.compute_elasticities()
```

```
Starting a pool of 2 processes ...
Started the process pool after 00:00:00.
Computing elasticities with respect to prices ...
Finished after 00:00:04.
Terminating the pool of 2 processes ...
Terminated the process pool after 00:00:00.
```

Solving a Logit problem does not require market-by-market computation, so parallelization does not change its estimation procedure. Elasticity computation does happen market-by-market, but this problem is so small that parallelization yields no gains here.

If the problem were much larger, running `Problem.solve` and `ProblemResults.compute_elasticities` under the `with` statement could substantially speed up estimation and elasticity computation.