Download the Jupyter Notebook for this section: post_estimation.ipynb

Post-Estimation Tutorial

[1]:
%matplotlib inline

import pyblp
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

pyblp.options.digits = 2
pyblp.options.verbose = False
pyblp.__version__
[1]:
'0.7.0'

This tutorial covers several features of pyblp that are available after estimation, including:

  1. Calculating elasticities and diversion ratios.

  2. Calculating marginal costs and markups.

  3. Computing the effects of mergers: prices, shares, and HHI.

  4. Using a parametric bootstrap to estimate standard errors.

  5. Estimating optimal instruments.

Problem Results

As in the fake cereal tutorial, we’ll first solve the fake cereal problem from Nevo (2000). We load the fake data, estimate the model as in the previous tutorial, and output the setup of the model to confirm that we have correctly configured the Problem.

[2]:
product_data = pd.read_csv(pyblp.data.NEVO_PRODUCTS_LOCATION)
agent_data = pd.read_csv(pyblp.data.NEVO_AGENTS_LOCATION)
product_formulations = (
   pyblp.Formulation('0 + prices', absorb='C(product_ids)'),
   pyblp.Formulation('1 + prices + sugar + mushy')
)
agent_formulation = pyblp.Formulation('0 + income + income_squared + age + child')
problem = pyblp.Problem(product_formulations, product_data, agent_formulation, agent_data)
problem
[2]:
Dimensions:
=================================================
 T    N     F    I     K1    K2    D    MD    ED
---  ----  ---  ----  ----  ----  ---  ----  ----
94   2256   5   1880   1     4     4    20    1
=================================================

Formulations:
===================================================================
       Column Indices:           0           1           2      3
-----------------------------  ------  --------------  -----  -----
 X1: Linear Characteristics    prices
X2: Nonlinear Characteristics    1         prices      sugar  mushy
       d: Demographics         income  income_squared   age   child
===================================================================

We’ll solve the problem in the same way as before. The Problem.solve method returns a ProblemResults class, which displays basic estimation results. The results that are displayed are simply formatted information extracted from various class attributes such as ProblemResults.sigma and ProblemResults.sigma_se.

[3]:
initial_sigma = np.diag([0.3302, 2.4526, 0.0163, 0.2441])
initial_pi = [
  [ 5.4819,  0,      0.2037,  0     ],
  [15.8935, -1.2000, 0,       2.6342],
  [-0.2506,  0,      0.0511,  0     ],
  [ 1.2650,  0,     -0.8091,  0     ]
]
bfgs = pyblp.Optimization('bfgs')
results = problem.solve(
    initial_sigma,
    initial_pi,
    optimization=bfgs,
    method='1s'
)
results
[3]:
Problem Results Summary:
========================================================================================================================
                                                                                                   Smallest    Largest
Computation  GMM   Optimization   Objective   Fixed Point  Contraction  Objective    Gradient      Hessian     Hessian
   Time      Step   Iterations   Evaluations  Iterations   Evaluations    Value    Infinity Norm  Eigenvalue  Eigenvalue
-----------  ----  ------------  -----------  -----------  -----------  ---------  -------------  ----------  ----------
 00:02:06     1         51           58          46236       143503     +4.6E+00     +6.9E-06      +2.8E-05    +1.6E+04
========================================================================================================================

Nonlinear Coefficient Estimates (Robust SEs in Parentheses):
=====================================================================================================================
Sigma:      1         prices      sugar       mushy     |   Pi:      income    income_squared     age        child
------  ----------  ----------  ----------  ----------  |  ------  ----------  --------------  ----------  ----------
  1      +5.6E-01    +0.0E+00    +0.0E+00    +0.0E+00   |    1      +2.3E+00      +0.0E+00      +1.3E+00    +0.0E+00
        (+1.6E-01)                                      |          (+1.2E+00)                  (+6.3E-01)
                                                        |
prices               +3.3E+00    +0.0E+00    +0.0E+00   |  prices   +5.9E+02      -3.0E+01      +0.0E+00    +1.1E+01
                    (+1.3E+00)                          |          (+2.7E+02)    (+1.4E+01)                (+4.1E+00)
                                                        |
sugar                            -5.8E-03    +0.0E+00   |  sugar    -3.8E-01      +0.0E+00      +5.2E-02    +0.0E+00
                                (+1.4E-02)              |          (+1.2E-01)                  (+2.6E-02)
                                                        |
mushy                                        +9.3E-02   |  mushy    +7.5E-01      +0.0E+00      -1.4E+00    +0.0E+00
                                            (+1.9E-01)  |          (+8.0E-01)                  (+6.7E-01)
=====================================================================================================================

Beta Estimates (Robust SEs in Parentheses):
==========
  prices
----------
 -6.3E+01
(+1.5E+01)
==========
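The displayed tables are simply formatted views of class attributes that can be inspected directly. For example, the following expressions return the estimated \(\Sigma\) and its standard errors (a quick illustration of the attributes mentioned above):

results.sigma
results.sigma_se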

Additional post-estimation outputs can be computed with ProblemResults methods.

Elasticities and Diversion Ratios

We can estimate elasticities, \(\varepsilon\), and diversion ratios, \(\mathscr{D}\), with ProblemResults.compute_elasticities and ProblemResults.compute_diversion_ratios.

As a reminder, elasticities in each market are

(1)\[\varepsilon_{jk} = \frac{x_k}{s_j}\frac{\partial s_j}{\partial x_k}.\]

Diversion ratios are

(2)\[\mathscr{D}_{jk} = -\frac{\partial s_k}{\partial x_j} \Big/ \frac{\partial s_j}{\partial x_j}.\]

Following Conlon and Mortimer (2018), we report the diversion to the outside good, \(\mathscr{D}_{j0}\), on the diagonal instead of \(\mathscr{D}_{jj}=-1\).

[4]:
elasticities = results.compute_elasticities()
diversions = results.compute_diversion_ratios()

Post-estimation outputs are computed for each market and stacked. We’ll use matplotlib functions to display the matrices associated with a single market.

[5]:
single_market = product_data['market_ids'] == 'C01Q1'
plt.colorbar(plt.matshow(elasticities[single_market]));
[Image: heatmap of the elasticity matrix for market C01Q1]
[6]:
plt.colorbar(plt.matshow(diversions[single_market]));
[Image: heatmap of the diversion ratio matrix for market C01Q1]

The diagonal of the first image consists of own elasticities and the diagonal of the second image consists of diversion ratios to the outside good. As one might expect, own price elasticities are large and negative while cross-price elasticities are positive but much smaller.
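As a rough sanity check on definition (1), we can approximate one own-price elasticity by finite differences: perturb a single product’s price, recompute shares with ProblemResults.compute_shares, and form the elasticity manually. This is just a sketch (the step size and indexing are our own choices), but the result should be close to the corresponding diagonal element of the market’s elasticity matrix plotted above.

prices = product_data['prices'].values.reshape((-1, 1))
baseline_shares = results.compute_shares(prices)

# Perturb the price of the first product in market C01Q1 and recompute shares.
j = np.flatnonzero(single_market)[0]
perturbed_prices = prices.copy()
perturbed_prices[j] += 1e-6
perturbed_shares = results.compute_shares(perturbed_prices)

# epsilon_jj is approximately (ds_j / dp_j) * (p_j / s_j).
(perturbed_shares[j] - baseline_shares[j]) / 1e-6 * prices[j] / baseline_shares[j]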

Elasticities and diversion ratios can be computed with respect to variables other than prices with the name argument of ProblemResults.compute_elasticities and ProblemResults.compute_diversion_ratios. Additionally, ProblemResults.compute_long_run_diversion_ratios can be used to understand substitution when products are eliminated from the choice set.
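For example, the following calls compute elasticities with respect to sugar content and long-run diversion ratios (a sketch; the variable names are ours):

sugar_elasticities = results.compute_elasticities(name='sugar')
long_run_diversions = results.compute_long_run_diversion_ratios()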

The convenience methods ProblemResults.extract_diagonals and ProblemResults.extract_diagonal_means can be used to extract information about own elasticities of demand from elasticity matrices.

[7]:
means = results.extract_diagonal_means(elasticities)
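If we want the full distribution of own elasticities rather than market-by-market means, ProblemResults.extract_diagonals keeps each market’s diagonal instead of averaging it:

own_elasticities = results.extract_diagonals(elasticities)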

An alternative to summarizing full elasticity matrices is to use ProblemResults.compute_aggregate_elasticities to estimate aggregate elasticities of demand, \(E\), in each market, which reflect the change in total sales under a proportional sales tax of some factor.

[8]:
aggregates = results.compute_aggregate_elasticities(factor=0.1)

Since demand for an entire product category is generally less elastic than the average elasticity of individual products, mean own elasticities are generally larger in magnitude than aggregate elasticities.

[9]:
plt.hist(
    [means.flatten(), aggregates.flatten()],
    color=['red', 'blue'],
    bins=50
);
plt.legend(['Mean Own Elasticities', 'Aggregate Elasticities']);
[Image: histograms of mean own elasticities and aggregate elasticities]

Marginal Costs and Markups

To compute marginal costs, \(c\), the product_data passed to Problem must have had a firm_ids field. Since we included firm IDs when configuring the problem, we can use ProblemResults.compute_costs.

[10]:
costs = results.compute_costs()
plt.hist(costs, bins=50);
plt.legend(["Marginal Costs"]);
[Image: histogram of marginal costs]

Other methods that compute supply-side outputs often compute marginal costs themselves. For example, ProblemResults.compute_markups will compute marginal costs when estimating markups, \(\mathscr{M}\), but computation can be sped up if we just use our pre-computed values.

[11]:
markups = results.compute_markups(costs=costs)
plt.hist(markups, bins=50);
plt.legend(["Markups"]);
[Image: histogram of markups]
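Markups here are Lerner-style, \(\mathscr{M} = (p - c) / p\), so they are easy to reproduce by hand. The following check is a sketch under that definition:

prices = product_data['prices'].values.reshape((-1, 1))
manual_markups = (prices - costs) / prices

# Should be True if the definition above matches compute_markups.
np.allclose(manual_markups, markups)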

Mergers

Before computing post-merger outputs, we’ll supplement our pre-merger markups with some other outputs. We’ll compute Herfindahl-Hirschman Indices, \(\text{HHI}\), with ProblemResults.compute_hhi; population-normalized gross expected profits, \(\pi\), with ProblemResults.compute_profits; and population-normalized consumer surpluses, \(\text{CS}\), with ProblemResults.compute_consumer_surpluses.

[12]:
hhi = results.compute_hhi()
profits = results.compute_profits(costs=costs)
cs = results.compute_consumer_surpluses()
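The HHI computation is similarly easy to reproduce: within each market, sum shares by firm, square the firm-level shares, sum across firms, and scale by 10,000. A sketch under that standard definition (pandas may order markets differently than pyblp):

manual_hhi = 10_000 * (
    product_data.groupby(['market_ids', 'firm_ids'])['shares']
    .sum()
    .pow(2)
    .groupby('market_ids')
    .sum()
)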

To compute post-merger outputs, we’ll create a new set of firm IDs that represent a merger of firm 2 into firm 1.

[13]:
product_data['merger_ids'] = product_data['firm_ids'].replace(2, 1)

We can use ProblemResults.compute_approximate_prices or ProblemResults.compute_prices to estimate post-merger prices. The first method, which is discussed, for example, in Nevo (1997), assumes that shares and their price derivatives are unaffected by the merger. The second method does not make these assumptions and iterates over the \(\zeta\)-markup equation from Morrow and Skerlos (2011) to solve the full system of \(J_t\) equations and \(J_t\) unknowns in each market \(t\). We’ll use the latter, since it is fast enough for this example problem.

[14]:
changed_prices = results.compute_prices(
    firm_ids=product_data['merger_ids'],
    costs=costs
)
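The iteration argument of ProblemResults.compute_prices accepts a pyblp.Iteration configuration if we want more control over the fixed point routine. For example (an illustrative configuration, not the one used above):

tighter_prices = results.compute_prices(
    firm_ids=product_data['merger_ids'],
    costs=costs,
    iteration=pyblp.Iteration('simple', {'atol': 1e-14})
)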

To estimate post-merger prices for a different merger, we could construct another set of counterfactual firm IDs and pass it to the firm_ids argument in the same way.

We’ll compute post-merger shares with ProblemResults.compute_shares.

[15]:
changed_shares = results.compute_shares(changed_prices)

Post-merger prices and shares are used to compute other post-merger outputs. For example, \(\text{HHI}\) increases.

[16]:
changed_hhi = results.compute_hhi(
    firm_ids=product_data['merger_ids'],
    shares=changed_shares
)
plt.hist(changed_hhi - hhi, bins=50);
plt.legend(["HHI Changes"]);
[Image: histogram of HHI changes]

Markups, \(\mathscr{M}\), and profits, \(\pi\), generally increase as well.

[17]:
changed_markups = results.compute_markups(changed_prices, costs)
plt.hist(changed_markups - markups, bins=50);
plt.legend(["Markup Changes"]);
[Image: histogram of markup changes]
[18]:
changed_profits = results.compute_profits(changed_prices, changed_shares, costs)
plt.hist(changed_profits - profits, bins=50);
plt.legend(["Profit Changes"]);
[Image: histogram of profit changes]

On the other hand, consumer surpluses, \(\text{CS}\), generally decrease.

[19]:
changed_cs = results.compute_consumer_surpluses(changed_prices)
plt.hist(changed_cs - cs, bins=50);
plt.legend(["Consumer Surplus Changes"]);
[Image: histogram of consumer surplus changes]
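Since HHI and consumer surplus are both market-level arrays, a compact descriptive summary of the merger’s effects across markets is straightforward to assemble (our own summary, not part of pyblp):

merger_summary = pd.DataFrame(index=problem.unique_market_ids, data={
    'HHI Change': (changed_hhi - hhi).flatten(),
    'CS Change': (changed_cs - cs).flatten()
})
merger_summary.describe().round(2)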

Bootstrapping Results

Post-estimation outputs can be informative, but they don’t mean much without a sense of sample-to-sample variability. One way to estimate confidence intervals for post-estimation outputs is with a standard bootstrap procedure:

  1. Construct a large number of bootstrap samples by sampling with replacement from the original product data.

  2. Initialize and solve a Problem for each bootstrap sample.

  3. Compute the desired post-estimation output for each bootstrapped ProblemResults and, from the resulting empirical distribution, construct bootstrap confidence intervals.

Although appealing because of its simplicity, this procedure often requires prohibitively expensive computational resources. Furthermore, human oversight of the optimization routine is often required to determine whether it ran into any problems and whether it successfully converged, and such oversight of each bootstrapped problem is usually not feasible.

A more reasonable alternative is a parametric bootstrap procedure:

  1. Construct a large number of draws from the estimated joint distribution of parameters.

  2. Compute the implied mean utility, \(\delta\), and shares, \(s\), for each draw. If a supply side was estimated, also compute the implied marginal costs, \(c\), and prices, \(p\).

  3. Compute the desired post-estimation output under each of these parametric bootstrap samples and, again, construct bootstrap confidence intervals from the resulting empirical distribution.

Compared to the standard bootstrap procedure, the parametric bootstrap requires far fewer computational resources and is simple enough not to require human oversight of each bootstrap iteration. The primary complication is that when supply is estimated, equilibrium prices and shares need to be computed for each parametric bootstrap sample by iterating over the \(\zeta\)-markup equation from Morrow and Skerlos (2011). Although nontrivial, this fixed point iteration problem is much less demanding than the full optimization routine required to solve the BLP problem from the start.

An empirical distribution of results computed according to this parametric bootstrap procedure can be created with the ProblemResults.bootstrap method, which returns a BootstrappedResults class that can be used just like ProblemResults to compute various post-estimation outputs. The difference is that BootstrappedResults methods return arrays with an extra first dimension, along which bootstrapped results are stacked.

We’ll construct 80% parametric bootstrap confidence intervals (the 10th and 90th percentiles of the bootstrapped distribution) for estimated mean own elasticities in each market of the fake cereal problem. Usually, bootstrapped confidence intervals should be based on thousands of draws, but we’ll use only 100 for the sake of speed in this example.

[20]:
bootstrapped_results = results.bootstrap(draws=100, seed=0)
bootstrapped_results
[20]:
Bootstrapped Problem Results Summary:
======================
Computation  Bootstrap
   Time        Draws
-----------  ---------
 00:00:24       100
======================
[21]:
bounds = np.percentile(
    bootstrapped_results.extract_diagonal_means(
        bootstrapped_results.compute_elasticities()
    ),
    q=[10, 90],
    axis=0
)
table = pd.DataFrame(index=problem.unique_market_ids, data={
    'Lower Bound': bounds[0].flatten(),
    'Mean Own Elasticity': means.flatten(),
    'Upper Bound': bounds[1].flatten()
})
table.round(2).head()
[21]:
       Lower Bound  Mean Own Elasticity  Upper Bound
C01Q1       -14.06                -0.81        10.66
C01Q2        -8.97                -0.69         7.54
C03Q1       -11.64                -0.55         7.98
C03Q2       -14.46                -0.60         6.47
C04Q1       -12.00                -0.68         5.61

Optimal Instruments

Given a consistent estimate of \(\theta\), we may want to compute the optimal instruments of Chamberlain (1987) and use them to re-solve the problem. Optimal instruments have been shown, for example, by Reynaert and Verboven (2014), to reduce bias, improve efficiency, and enhance stability of BLP estimates.

The ProblemResults.compute_optimal_instruments method computes the expected Jacobians that comprise the optimal instruments by integrating over the density of \(\xi\) (and \(\omega\) if a supply side was estimated). By default, the method approximates this integral by averaging over the Jacobian realizations computed under draws from the asymptotic normal distribution of the error terms. Since this process is computationally expensive and often doesn’t make much of a difference, we’ll use method='approximate' in this example to simply evaluate the Jacobians at the expected value of \(\xi\), zero.

[22]:
instrument_results = results.compute_optimal_instruments(method='approximate')
instrument_results
[22]:
Optimal Instrument Results Summary:
=======================
Computation  Error Term
   Time        Draws
-----------  ----------
 00:00:01        1
=======================
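If we did want to integrate over draws from the asymptotic normal distribution of the error terms, we could use the draws-based method instead. The following sketch assumes the method name 'normal' and uses a modest number of draws (more draws are more accurate but slower):

drawn_instrument_results = results.compute_optimal_instruments(
    method='normal',  # average Jacobian realizations over error term draws
    draws=100,
    seed=0
)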

We can use the OptimalInstrumentResults.to_problem method to re-create the fake cereal problem with the estimated optimal excluded instruments.

[23]:
updated_problem = instrument_results.to_problem()
updated_problem
[23]:
Dimensions:
=================================================
 T    N     F    I     K1    K2    D    MD    ED
---  ----  ---  ----  ----  ----  ---  ----  ----
94   2256   5   1880   1     4     4    14    1
=================================================

Formulations:
===================================================================
       Column Indices:           0           1           2      3
-----------------------------  ------  --------------  -----  -----
 X1: Linear Characteristics    prices
X2: Nonlinear Characteristics    1         prices      sugar  mushy
       d: Demographics         income  income_squared   age   child
===================================================================

We can solve this updated problem just like the original one. We’ll start at our consistent estimate of \(\theta\).

[24]:
updated_results = updated_problem.solve(
    results.sigma,
    results.pi,
    optimization=pyblp.Optimization('bfgs'),
    method='1s'
)
updated_results
[24]:
Problem Results Summary:
========================================================================================================================
                                                                                                   Smallest    Largest
Computation  GMM   Optimization   Objective   Fixed Point  Contraction  Objective    Gradient      Hessian     Hessian
   Time      Step   Iterations   Evaluations  Iterations   Evaluations    Value    Infinity Norm  Eigenvalue  Eigenvalue
-----------  ----  ------------  -----------  -----------  -----------  ---------  -------------  ----------  ----------
 00:01:56     1         45           53          47406       146847     +5.7E+00     +2.1E-06      +3.0E-12    +1.5E+04
========================================================================================================================

Nonlinear Coefficient Estimates (Robust SEs in Parentheses):
=====================================================================================================================
Sigma:      1         prices      sugar       mushy     |   Pi:      income    income_squared     age        child
------  ----------  ----------  ----------  ----------  |  ------  ----------  --------------  ----------  ----------
  1      +2.2E-01    +0.0E+00    +0.0E+00    +0.0E+00   |    1      +6.2E+00      +0.0E+00      +1.0E-01    +0.0E+00
        (+8.1E-02)                                      |          (+5.4E-01)                  (+2.2E-01)
                                                        |
prices               +3.1E+00    +0.0E+00    +0.0E+00   |  prices   +4.5E+01      -2.8E+00      +0.0E+00    +3.3E+00
                    (+7.0E-01)                          |          (+9.3E+01)    (+4.9E+00)                (+2.4E+00)
                                                        |
sugar                            -5.8E-03    +0.0E+00   |  sugar    -2.8E-01      +0.0E+00      +3.7E-02    +0.0E+00
                                   (NA)                 |          (+3.6E-02)                  (+1.6E-02)
                                                        |
mushy                                        +9.5E-01   |  mushy    +5.1E-01      +0.0E+00      -1.8E-01    +0.0E+00
                                            (+3.0E-01)  |          (+2.7E-01)                  (+2.3E-01)
=====================================================================================================================

Beta Estimates (Robust SEs in Parentheses):
==========
  prices
----------
 -2.9E+01
(+4.5E+00)
==========
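As a final sanity check, we can compare the mean own elasticities implied by the original and updated estimates, reusing the methods from earlier in this tutorial (an illustrative comparison):

updated_means = updated_results.extract_diagonal_means(
    updated_results.compute_elasticities()
)
plt.hist([means.flatten(), updated_means.flatten()], bins=50);
plt.legend(['Original', 'Optimal Instruments']);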