This package is in beta. The API may change in future versions. Please use the GitHub issue tracker to report bugs or to request features.
PyBLP is a Python 3 implementation of routines for estimating the demand for differentiated products with BLP-type random coefficients logit models. This package was created by Jeff Gortmaker in collaboration with Chris Conlon.
Development of the package has been guided by the work of many researchers and practitioners. For a full list of references, including the original work of Berry, Levinsohn, and Pakes (1995), refer to the references section of the documentation.
The PyBLP package has been tested on Python versions 3.6, 3.7, and 3.8. The SciPy instructions for installing related packages are a good guide for how to install a scientific Python environment. A good choice is the Anaconda Distribution, since it comes packaged with the following PyBLP dependencies: NumPy, SciPy, SymPy, and Patsy. For absorption of high-dimensional fixed effects, PyBLP also depends on its companion package PyHDFE, which will be installed when PyBLP is installed.
However, PyBLP may not work with old versions of its dependencies. You can update PyBLP’s Anaconda dependencies with:
conda update numpy scipy sympy patsy
You can update PyHDFE with:
pip install --upgrade pyhdfe
You can install the current release of PyBLP with pip:
pip install pyblp
You can upgrade to a newer release with the --upgrade flag:
pip install --upgrade pyblp
If you lack permissions, you can install PyBLP in your user directory with the --user flag:
pip install --user pyblp
Alternatively, you can download a wheel or source archive from PyPI. You can find the latest development code on GitHub and the latest development documentation here.
Once installed, PyBLP can be incorporated into projects written in many other languages with the help of various tools that enable interoperability with Python.
For example, the reticulate package makes interacting with PyBLP in R straightforward:
The following sections provide a very brief overview of the BLP model and how it is estimated. The goal is to concisely introduce the notation and terminology used throughout the rest of the documentation. For a more in-depth overview, refer to Conlon and Gortmaker (2020).
There are \(t = 1, 2, \dotsc, T\) markets, each with \(j = 1, 2, \dotsc, J_t\) products produced by \(f = 1, 2, \dotsc, F_t\) firms, for a total of \(N\) products across all markets. There are \(i = 1, 2, \dotsc, I_t\) individuals/agents who choose among the \(J_t\) products and an outside good \(j = 0\). These numbers also represent sets. For example, \(J_t = \{1, 2, \dots, J_t\}\).
Observed demand-side product characteristics are contained in the \(N \times K_1\) matrix of linear characteristics, \(X_1\), and the \(N \times K_2\) matrix of nonlinear characteristics, \(X_2\), which is typically a subset of \(X_1\). Unobserved demand-side product characteristics, \(\xi\), are an \(N \times 1\) vector.
In market \(t\), observed agent characteristics are an \(I_t \times D\) matrix called demographics, \(d\). Unobserved agent characteristics are an \(I_t \times K_2\) matrix, \(\nu\).
The indirect utility of agent \(i\) from purchasing product \(j\) in market \(t\) is
The \(K_1 \times 1\) vector of demand-side linear parameters, \(\beta\), is partitioned into two components: \(\alpha\) is a \(K_1^\text{en} \times 1\) vector of parameters on the \(N \times K_1^\text{en}\) submatrix of endogenous characteristics, \(X_1^\text{en}\), and \(\beta^\text{ex}\) is a \(K_1^\text{ex} \times 1\) vector of parameters on the \(N \times K_1^\text{ex}\) submatrix of exogenous characteristics, \(X_1^\text{ex}\). Usually, \(X_1^\text{en} = p\), prices, so \(\alpha\) is simply a scalar.
The agent-specific portion of utility in a single market is, in vector-matrix form,
The model incorporates both observable (demographics) and unobservable taste heterogeneity through random coefficients. For the unobserved heterogeneity, we let \(\nu\) denote independent draws from the standard normal distribution. These are scaled by a \(K_2 \times K_2\) lower-triangular matrix \(\Sigma\), which denotes the Cholesky root of the covariance matrix for unobserved taste heterogeneity. The \(K_2 \times D\) matrix \(\Pi\) measures how agent tastes vary with demographics.
In the above expression, random coefficients are assumed to be normally distributed. To incorporate one or more lognormal random coefficients, the associated columns in the parenthesized expression can be exponentiated before being pre-multiplied by \(X_2\). For example, this allows for the coefficient on price to be lognormal so that demand slopes down for all agents. For lognormal random coefficients, a constant column is typically included in \(d\) so that its coefficients in \(\Pi\) parametrize the means of the logs of the random coefficients.
Random idiosyncratic preferences, \(\epsilon_{ijt}\), are assumed to be Type I Extreme Value, so that conditional on the heterogeneous coefficients, market shares follow the well-known logit form. Aggregate market shares are obtained by integrating over the distribution of individual heterogeneity. They are approximated with Monte Carlo integration or quadrature rules defined by the \(I_t \times K_2\) matrix of integration nodes, \(\nu\), and an \(I_t \times 1\) vector of integration weights, \(w\):
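As a rough illustration of this approximation, here is a minimal pure-Python sketch (not PyBLP's implementation) that approximates aggregate shares by Monte Carlo in a single market with one random coefficient; the utilities, scale, and number of draws are made-up values:

```python
import math
import random

def monte_carlo_shares(delta, sigma, x2, draws=1000, seed=0):
    """Approximate aggregate logit shares by Monte Carlo integration.

    delta: list of mean utilities, one per product.
    sigma: scale on a single standard normal taste draw.
    x2:    list with one nonlinear characteristic per product.
    """
    rng = random.Random(seed)
    J = len(delta)
    shares = [0.0] * J
    weight = 1.0 / draws  # equal Monte Carlo weights, w_i = 1 / I_t
    for _ in range(draws):
        nu = rng.gauss(0.0, 1.0)  # unobserved taste draw
        exp_u = [math.exp(delta[j] + sigma * nu * x2[j]) for j in range(J)]
        denom = 1.0 + sum(exp_u)  # the "1" is the outside good
        for j in range(J):
            shares[j] += weight * exp_u[j] / denom
    return shares

shares = monte_carlo_shares([1.0, 0.5], sigma=0.5, x2=[1.0, 2.0])
print(shares)
```

In practice, PyBLP builds the nodes \(\nu\) and weights \(w\) from an Integration configuration rather than raw draws like these.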
There is a one in the denominator because the utility of the outside good is normalized to \(U_{i0t} = 0\). The scale of utility is normalized by the variance of \(\epsilon_{ijt}\).
Observed supply-side product characteristics are contained in the \(N \times K_3\) matrix of supply-side characteristics, \(X_3\). Prices cannot be supply-side characteristics, but non-price product characteristics often overlap with the demand-side characteristics in \(X_1\) and \(X_2\). Unobserved supply-side product characteristics, \(\omega\), are an \(N \times 1\) vector.
Firm \(f\) chooses prices in market \(t\) to maximize the profits of its products \(J_{ft} \subset J_t\):
Here, \(\mathscr{H}\) denotes the market-level ownership or product holdings matrix in the market, where \(\mathscr{H}_{jk}\) is typically \(1\) if the same firm produces products \(j\) and \(k\), and \(0\) otherwise.
To include a supply side, we must specify a functional form for marginal costs:
A demand side is always estimated but including a supply side is optional. With only a demand side, there are three sets of parameters to be estimated: \(\beta\) (which may include \(\alpha\)), \(\Sigma\) and \(\Pi\). With a supply side, there is also \(\gamma\). The linear parameters, \(\beta\) and \(\gamma\), are typically concentrated out of the problem. The exception is \(\alpha\), which cannot be concentrated out when there is a supply side because it is needed to compute demand derivatives and hence marginal costs. Linear parameters that are not concentrated out along with unknown nonlinear parameters in \(\Sigma\) and \(\Pi\) are collectively denoted \(\theta\).
in which \(q(\theta)\) is the GMM objective. By default, PyBLP scales this value by \(N\) so that objectives across different problem sizes are comparable. This behavior can be disabled. In some of the BLP literature and in earlier versions of this package, the objective was scaled by \(N^2\).
Here, \(W\) is an \(M \times M\) weighting matrix and \(\bar{g}\) is an \(M \times 1\) vector of averaged demand- and supply-side moments:
Given a \(\theta\), the first step to computing the objective \(q(\theta)\) is to compute \(\delta(\theta)\) in each market with the following standard contraction:
where \(s\) are the market’s observed shares and \(s(\delta, \theta)\) are calculated market shares. Iteration terminates when the norm of the change in \(\delta(\theta)\) is less than a small number.
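The contraction can be sketched in a few lines of plain Python. This toy version assumes no heterogeneity (\(\mu = 0\)), in which case the fixed point has the closed form \(\delta_j = \ln s_j - \ln s_0\), which lets us check convergence:

```python
import math

def contraction(observed_shares, tol=1e-12, max_iterations=1000):
    """Solve s(delta) = s for delta with the standard BLP contraction
    delta <- delta + log(s) - log(s(delta)), specialized to the plain
    logit model with no heterogeneity."""
    log_s = [math.log(s) for s in observed_shares]
    delta = [0.0] * len(observed_shares)
    for _ in range(max_iterations):
        exp_d = [math.exp(d) for d in delta]
        denom = 1.0 + sum(exp_d)
        predicted = [e / denom for e in exp_d]
        new_delta = [d + ls - math.log(p) for d, ls, p in zip(delta, log_s, predicted)]
        if max(abs(n - d) for n, d in zip(new_delta, delta)) < tol:
            return new_delta
        delta = new_delta
    return delta

s = [0.2, 0.3]  # the outside share is 0.5
delta = contraction(s)
# plain logit closed form: delta_j = log(s_j) - log(s_0)
print(delta, [math.log(sj) - math.log(0.5) for sj in s])
```

With random coefficients, the calculated shares inside the loop would instead integrate over agents, but the update rule is the same.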
With a supply side, marginal costs are then computed according to (7):
With only a demand side, \(\alpha\) can be concentrated out, so \(X = X_1\), \(Z = Z_D\), and \(Y = \delta(\theta)\) recover the full \(\hat{\beta}\) in (15).
Finally, the unobserved product characteristics (i.e., the structural errors),
where \(k_1, k_2, \dotsc, k_{E_D}\) and \(\ell_1, \ell_2, \dotsc, \ell_{E_S}\) index unobserved characteristics that are fixed across \(E_D\) and \(E_S\) dimensions. For example, with \(E_D = 1\) dimension of product fixed effects, \(\xi_{jt} = \xi_j + \Delta\xi_{jt}\).
Small numbers of fixed effects can be estimated with dummy variables in \(X_1\), \(X_3\), \(Z_D\), and \(Z_S\). However, this approach does not scale with high dimensional fixed effects because it requires constructing and inverting an infeasibly large matrix in (15).
Instead, fixed effects are typically absorbed into \(X\), \(Z\), and \(Y(\theta)\) in (15). With one fixed effect, these matrices are simply de-meaned within each level of the fixed effect. Both \(X\) and \(Z\) can be de-meaned just once, but \(Y(\theta)\) must be de-meaned for each new \(\theta\).
This procedure is equivalent to replacing each column of the matrices with residuals from a regression of the column on the fixed effect. The Frisch-Waugh-Lovell (FWL) theorem of Frisch and Waugh (1933) and Lovell (1963) guarantees that using these residualized matrices gives the same results as including fixed effects as dummy variables. When \(E_D > 1\) or \(E_S > 1\), the matrices are residualized with more involved algorithms.
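A one-fixed-effect absorption step amounts to de-meaning within each level of the fixed effect. The helper and toy data below are hypothetical illustrations, not part of PyHDFE:

```python
from collections import defaultdict

def demean_within(values, fixed_effect_ids):
    """Residualize a column on one fixed effect by de-meaning
    within each level of the fixed effect."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for v, fe in zip(values, fixed_effect_ids):
        totals[fe] += v
        counts[fe] += 1
    means = {fe: totals[fe] / counts[fe] for fe in totals}
    return [v - means[fe] for v, fe in zip(values, fixed_effect_ids)]

prices = [1.0, 2.0, 3.0, 5.0]
product_ids = ["a", "a", "b", "b"]
residuals = demean_within(prices, product_ids)
print(residuals)  # [-0.5, 0.5, -1.0, 1.0]
```

By the FWL theorem, regressing these residualized columns on each other reproduces the coefficients from the dummy-variable regression.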
Once fixed effects have been absorbed, estimation is as described above with the structural errors \(\Delta\xi\) and \(\Delta\omega\).
In the spirit of Imbens and Lancaster (1994), Petrin (2002), and Berry, Levinsohn, and Pakes (2004), more detailed micro data on individual agent decisions can be used to supplement the standard demand- and supply-side moments \(\bar{g}_D\) and \(\bar{g}_S\) in (11) with an additional \(m = 1, 2, \ldots, M_M\) averaged micro moments, \(\bar{g}_M\), for a total of \(M = M_D + M_S + M_M\) averaged moments:
Each micro moment \(m\) is approximated in a set \(T_m \subset T\) of markets in which its micro data are relevant and then averaged across these markets:
The vector \(\bar{g}_M\) contains sample analogues of micro moment conditions \(E[g_{M,imt}] = 0\) where \(g_{M,imt}\) is typically a function of choice probabilities, data in market \(t\), and a statistic computed from survey data that the moment aims to match.
Micro moments are computed for each \(\theta\) and contribute to the GMM objective \(q(\theta)\) in (10). Their derivatives with respect to \(\theta\) are added as rows to \(\bar{G}\) in (19), and blocks are added to both \(W\) and \(S\) in (23) and (24). The covariance between standard moments and micro moments is assumed to be zero, so these matrices are block-diagonal. The covariance between micro moments \(m\) and \(n\) in \(S\) is set to zero if \(T_{mn} = T_m \cap T_n = \emptyset\) and otherwise is approximated by
Incorporating parameters that measure within nesting group correlation gives the random coefficients nested logit (RCNL) model of Brenkers and Verboven (2006) and Grigolon and Verboven (2014). There are \(h = 1, 2, \dotsc, H\) nesting groups and each product \(j\) is assigned to a group \(h(j)\). The set \(J_{ht} \subset J_t\) denotes the products in group \(h\) and market \(t\).
In the RCNL model, idiosyncratic preferences are partitioned into
where \(\bar{\epsilon}_{ijt}\) is Type I Extreme Value and \(\bar{\epsilon}_{ih(j)t}\) is distributed such that \(\epsilon_{ijt}\) is still Type I Extreme Value.
The nesting parameters, \(\rho\), can either be a \(H \times 1\) vector or a scalar so that for all groups \(\rho_h = \rho\). Letting \(\rho \to 0\) gives the standard BLP model and \(\rho \to 1\) gives division by zero errors. With \(\rho_h \in (0, 1)\), the expression for choice probabilities in (5) becomes more complicated:
In both models, a supply side can still be estimated jointly with demand. Estimation is as described above with a representative agent in each market: \(I_t = 1\) and \(w_1 = 1\).
Counterfactual evaluation, synthetic data simulation, and optimal instrument generation often involve solving for prices implied by the Bertrand first order conditions in (7). Solving this system with Newton’s method is slow and iterating over \(p \leftarrow c + \eta(p)\) may not converge because it is not a contraction.
which, unlike (7), is a contraction. Iteration terminates when the norm of firms’ first order conditions, \(||\Lambda(p)(p - c - \zeta(p))||\), is less than a small number.
If marginal costs depend on quantity, then they also depend on prices and need to be updated during each iteration: \(c_{jt} = c_{jt}(s_{jt}(p))\).
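To illustrate the flavor of this markup fixed point, here is a sketch for the special case of a single-product logit monopolist, where the markup is \(1 / (\alpha (1 - s))\). This is a simplified one-product analogue, not the \(\zeta\)-markup equation PyBLP implements:

```python
import math

def solve_price(c, alpha, delta0, tol=1e-12, max_iterations=1000):
    """Fixed-point iteration p <- c + markup(p) for a single-product
    logit monopolist. The markup 1 / (alpha * (1 - s)) follows from
    the Bertrand first order condition s + (p - c) ds/dp = 0 with
    ds/dp = -alpha * s * (1 - s)."""
    p = c
    for _ in range(max_iterations):
        share = math.exp(delta0 - alpha * p) / (1.0 + math.exp(delta0 - alpha * p))
        new_p = c + 1.0 / (alpha * (1.0 - share))  # logit markup
        if abs(new_p - p) < tol:
            return new_p
        p = new_p
    return p

price = solve_price(c=1.0, alpha=2.0, delta0=1.0)
print(price)
```

In the multi-product case, the scalar markup is replaced by matrix expressions involving \(\Lambda(p)\) and the ownership structure, but the same iterate-until-small-change logic applies.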
This section uses a series of Jupyter Notebooks to explain how PyBLP can be used to solve example problems, compute post-estimation outputs, and simulate problems. Other examples can be found in the API Documentation.
In this tutorial, we’ll use data from Nevo (2000) to solve the paper’s fake cereal problem. Locations of CSV files that contain the data are in the data module.
We will compare two simple models, the plain (IIA) logit model and the nested logit (GEV) model using the fake cereal dataset of Nevo (2000).
where \(\epsilon_{ijt}\) is distributed IID with the Type I Extreme Value (Gumbel) distribution. It is common to normalize the mean utility of the outside good to zero so that \(U_{i0t} = \epsilon_{i0t}\). This gives us aggregate market shares
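This logit structure can be verified numerically: compute shares from mean utilities, then invert them with the closed-form logit inversion \(\delta_j = \ln s_j - \ln s_0\). The helper names below are hypothetical:

```python
import math

def logit_shares(delta):
    """Aggregate logit shares implied by mean utilities delta."""
    exp_d = [math.exp(d) for d in delta]
    denom = 1.0 + sum(exp_d)  # the "1" is the outside good
    return [e / denom for e in exp_d]

def berry_inversion(shares):
    """Recover mean utilities: delta_j = log(s_j) - log(s_0)."""
    outside = 1.0 - sum(shares)
    return [math.log(s) - math.log(outside) for s in shares]

delta = [1.0, -0.5, 0.3]
recovered = berry_inversion(logit_shares(delta))
print(recovered)  # approximately [1.0, -0.5, 0.3]
```

This closed-form inversion is what makes the plain logit model estimable by linear IV regression, with no optimization routine needed.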
A Logit Problem can be created by simply excluding the formulation for the nonlinear parameters, \(X_2\), along with any agent information. In other words, it requires only specifying the linear component of demand.
We’ll set up and solve a simple version of the fake data cereal problem from Nevo (2000). Since we won’t include any demand-side nonlinear characteristics or parameters, we don’t have to worry about configuring an optimization routine.
There are some important reserved variable names:
market_ids are the unique market identifiers which we subscript with \(t\).
shares specifies the market shares, which need to be between zero and one, and within a market ID, \(\sum_{j} s_{jt} < 1\).
prices are prices \(p_{jt}\). These have some special properties and are always treated as endogenous.
demand_instruments0, demand_instruments1, and so on are numbered demand instruments. These represent only the excluded instruments. The exogenous regressors in \(X_1\) will be automatically added to the set of instruments.
We begin with two steps:
Load the product data which at a minimum consists of market_ids, shares, prices, and at least a single column of demand instruments, demand_instruments0.
Define a Formulation for the \(X_1\) (linear) demand model.
This and all other formulas are similar to R and patsy formulas.
It includes a constant by default. To exclude the constant, specify either a 0 or a -1.
To efficiently include fixed effects, use the absorb option and specify which categorical variables you would like to absorb.
Some model reduction may happen automatically. The constant will be excluded if you include fixed effects and some precautions are taken against collinearity. However, you will have to make sure that differently-named variables are not collinear.
The product_data argument of Problem should be a structured array-like object with fields that store data. Product data can be a structured NumPy array, a pandas DataFrame, or other similar objects.
The product data contains market_ids, product_ids, firm_ids, shares, prices, a number of other IDs and product characteristics, and some pre-computed excluded demand_instruments0, demand_instruments1, and so on. The product_ids will be incorporated as fixed effects.
For more information about the instruments and the example data as a whole, refer to the data module.
We can combine the Formulation and product_data to construct a Problem. We pass the Formulation first and the product_data second. We can also display the properties of the problem by typing its name.
The Problem.solve method always returns a ProblemResults class, which can be used to compute post-estimation outputs. See the post estimation tutorial for more information.
[5]:
logit_results = problem.solve()
logit_results
[5]:
Problem Results Summary:
==========================================
GMM Objective Clipped Weighting Matrix
Step Value Shares Condition Number
---- --------- ------- ----------------
2 +1.9E+02 0 +5.7E+07
==========================================
Cumulative Statistics:
========================
Computation Objective
Time Evaluations
----------- -----------
00:00:00 2
========================
Beta Estimates (Robust SEs in Parentheses):
==========
prices
----------
-3.0E+01
(+1.0E+00)
==========
Now, we require that \(\epsilon_{ijt} = \bar{\epsilon}_{ih(j)t} + (1 - \rho) \bar{\epsilon}_{ijt}\) is distributed IID with the Type I Extreme Value (Gumbel) distribution. As \(\rho \rightarrow 1\), all consumers stay within their group. As \(\rho \rightarrow 0\), this collapses to the IIA logit. Note that if we wanted, we could allow \(\rho\) to differ between groups with the notation \(\rho_{h(j)}\).
This gives us aggregate market shares as the product of two logits, the within group logit and the across group logit:
where \(s_{j|h(j)t} = s_{jt} / s_{h(j)t}\) and \(s_{h(j)t}\) is the share of group \(h\) in market \(t\). See Berry (1994) or Cardell (1997) for more information.
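The decomposition of nested logit shares into a within-group logit and an across-group logit can be sketched directly. The implementation below is an illustrative toy with made-up utilities and groups, not PyBLP's code:

```python
import math
from collections import defaultdict

def nested_logit_shares(delta, groups, rho):
    """Nested logit shares as the product of a within-group logit
    and an across-group logit built from inclusive values."""
    scaled = defaultdict(float)
    for d, h in zip(delta, groups):
        scaled[h] += math.exp(d / (1.0 - rho))
    # inclusive value of each group: IV_h = (1 - rho) * log(sum)
    inclusive = {h: (1.0 - rho) * math.log(v) for h, v in scaled.items()}
    denom = 1.0 + sum(math.exp(iv) for iv in inclusive.values())
    shares = []
    for d, h in zip(delta, groups):
        within = math.exp(d / (1.0 - rho)) / scaled[h]  # s_{j|h}
        group = math.exp(inclusive[h]) / denom          # s_h
        shares.append(within * group)
    return shares

shares = nested_logit_shares([1.0, 0.5, -0.2], ["h1", "h1", "h2"], rho=0.5)
print(shares, 1.0 - sum(shares))  # inside shares and the outside share
```

Setting \(\rho\) close to zero reproduces the plain logit shares, consistent with the limiting behavior described above.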
Again, the left hand side is data, though the \(\ln s_{j|h(j)t}\) is clearly endogenous which means we must instrument for it. Rather than include \(\ln s_{j|h(j)t}\) along with the linear components of utility, \(X_1\), whenever nesting_ids are included in product_data, \(\rho\) is treated as a nonlinear \(X_2\) parameter. This means that the linear component is given instead by
By including nesting_ids (another reserved name) as a field in product_data, we tell the package to estimate a nested logit model, and we don’t need to change any of the formulas. We show how to construct the category groupings in two different ways:
We put all products in a single nest (only the outside good in the other nest).
We put products into two nests (either mushy or non-mushy).
We also construct an additional instrument based on the number of products per nest. Typically this is useful as a source of exogenous variation in the within group share \(\ln s_{j|h(j)t}\). However, in this example, because the number of products per nest does not vary across markets, this instrument becomes irrelevant once we include product fixed effects.
We’ll define a function that constructs the additional instrument and solves the nested logit problem. We’ll exclude product ID fixed effects, which are collinear with mushy, and we’ll choose \(\rho = 0.7\) as the initial value at which the optimization routine will start.
Problem Results Summary:
======================================================================================
GMM Objective Projected Reduced Clipped Weighting Matrix Covariance Matrix
Step Value Gradient Norm Hessian Shares Condition Number Condition Number
---- --------- ------------- -------- ------- ---------------- -----------------
2 +2.0E+02 +7.9E-10 +1.1E+04 0 +2.0E+09 +3.0E+04
======================================================================================
Cumulative Statistics:
=================================================
Computation Optimizer Optimization Objective
Time Converged Iterations Evaluations
----------- --------- ------------ -----------
00:00:05 Yes 3 8
=================================================
Rho Estimates (Robust SEs in Parentheses):
==========
All Groups
----------
+9.8E-01
(+1.4E-02)
==========
Beta Estimates (Robust SEs in Parentheses):
==========
prices
----------
-1.2E+00
(+4.0E-01)
==========
When we inspect the Problem, the only changes from the plain logit model are the additional instrument that contributes to \(M_D\) and the inclusion of \(H\), the number of nesting categories.
[8]:
nl_results1.problem
[8]:
Dimensions:
===============================
T N F K1 MD H
--- ---- --- ---- ---- ---
94 2256 5 1 21 1
===============================
Formulations:
==================================
Column Indices: 0
-------------------------- ------
X1: Linear Characteristics prices
==================================
Next, we’ll solve the problem when there are two nests for mushy and non-mushy.
The package is designed to prevent the user from treating the within group share, \(\log s_{j|h(j)t}\), as an exogenous variable. For example, if we were to compute a group_share variable and use the algebraic functionality of Formulation by including the expression log(shares/group_share) in our formula for \(X_1\), the package would raise an error because the package knows that shares should not be included in this formulation.
To demonstrate why this is a bad idea, we’ll override this feature by calculating \(\log s_{j|h(j)t}\) and including it as an additional variable in \(X_1\). To do so, we’ll first re-define our function for setting up and solving the nested logit problem.
In this tutorial, we’ll use data from Nevo (2000) to solve the paper’s fake cereal problem. Locations of CSV files that contain the data are in the data module.
The random coefficients model extends the plain logit model by allowing for correlated tastes for different product characteristics. In this model (indirect) utility is given by
The main addition is that the coefficients \(\beta_i = (\alpha_i, \beta_i^\text{ex})\) now have individual-specific subscripts \(i\).
Conditional on \(\beta_i\), the individual market shares follow the same logit form as before. But now we must integrate over heterogeneous individuals to get the aggregate market share:
In general, this integral needs to be calculated numerically. This also means that we can’t directly linearize the model. It is common to re-parametrize the model to separate the aspects of mean utility that all individuals agree on, \(\delta_{jt} = \alpha p_{jt} + x_{jt} \beta^\text{ex} + \xi_{jt}\), from the individual specific heterogeneity, \(\mu_{ijt}(\theta)\). This gives us
Given a guess of \(\theta\), we can solve the system of nonlinear equations for the vector \(\delta\) that equates observed and predicted market shares, \(s_{jt} = s_{jt}(\delta, \theta)\). Now we can perform a linear IV GMM regression of the form
Application of Random Coefficients Logit with the Fake Cereal Data¶
To include random coefficients we need to add a Formulation for the demand-side nonlinear characteristics \(X_2\).
Just like in the logit case we have the same reserved field names in product_data:
market_ids are the unique market identifiers which we subscript \(t\).
shares specifies the market shares, which need to be between zero and one, and within a market ID, \(\sum_{j} s_{jt} < 1\).
prices are prices \(p_{jt}\). These have some special properties and are always treated as endogenous.
demand_instruments0, demand_instruments1, and so on are numbered demand instruments. These represent only the excluded instruments. The exogenous regressors in \(X_1\) (of which \(X_2\) is typically a subset) will be automatically added to the set of instruments.
We proceed with the following steps:
Load the product_data which at a minimum consists of market_ids, shares, prices, and at least a single column of demand instruments, demand_instruments0.
Define a Formulation for the \(X_1\) (linear) demand model.
This and all other formulas are similar to R and patsy formulas.
It includes a constant by default. To exclude the constant, specify either a 0 or a -1.
To efficiently include fixed effects, use the absorb option and specify which categorical variables you would like to absorb.
Some model reduction may happen automatically. The constant will be excluded if you include fixed effects and some precautions are taken against collinearity. However, you will have to make sure that differently-named variables are not collinear.
Define a Formulation for the \(X_2\) (nonlinear) demand model.
Include only the variables over which we want random coefficients.
Do not absorb or include fixed effects.
It will include a random coefficient on the constant (to capture inside good vs. outside good preference) unless you specify not to with a 0 or a -1.
Define an Integration configuration to solve the market share integral from several available options:
It is required to specify an initial guess of the nonlinear parameters. This serves two primary purposes: speeding up estimation and indicating to the solver through initial values of zero which parameters are restricted to be always zero.
Problem.solve takes an initial guess \(\Sigma_0\) of \(\Sigma\). It guarantees that \(\hat{\Sigma}\) (the estimated parameters) will have the same sparsity structure as \(\Sigma_0\), so any element of \(\Sigma_0\) that is zero is restricted to be zero in the solution \(\hat{\Sigma}\). For example, a popular restriction is that \(\Sigma\) is diagonal; this can be achieved by passing a diagonal matrix as \(\Sigma_0\).
As is common with multivariate normal distributions, \(\Sigma\) is not estimated directly. Rather, its matrix square (Cholesky) root \(L\) is estimated where \(LL' = \Sigma\).
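As a quick check of this parametrization, multiplying a lower-triangular \(L\) by its transpose recovers a symmetric \(\Sigma\). The small helper below is purely illustrative:

```python
def matmul_transpose(L):
    """Compute L @ L.T for a square lower-triangular matrix stored
    as a list of rows, giving the implied covariance Sigma."""
    n = len(L)
    return [[sum(L[i][k] * L[j][k] for k in range(n)) for j in range(n)] for i in range(n)]

# a lower-triangular guess; the upper triangle is implicitly zero
L = [[2.0, 0.0],
     [0.5, 1.0]]
sigma = matmul_transpose(L)
print(sigma)  # [[4.0, 1.0], [1.0, 1.25]]
```

Estimating \(L\) rather than \(\Sigma\) keeps the implied covariance matrix positive semi-definite by construction.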
The product_data argument of Problem should be a structured array-like object with fields that store data. Product data can be a structured NumPy array, a pandas DataFrame, or other similar objects.
The product data contains market_ids, product_ids, firm_ids, shares, prices, a number of other IDs and product characteristics, and some pre-computed excluded demand_instruments0, demand_instruments1, and so on. The product_ids will be incorporated as fixed effects.
For more information about the instruments and the example data as a whole, refer to the data module.
Setting Up and Solving the Problem Without Demographics¶
Formulations, product data, and an integration configuration are collectively used to initialize a Problem. Once initialized, Problem.solve runs the estimation routine. The arguments to Problem.solve configure how estimation is performed. For example, optimization and iteration arguments configure the optimization and iteration routines that are used by the outer and inner loops of estimation.
We’ll specify Formulation configurations for \(X_1\), the demand-side linear characteristics, and \(X_2\), the nonlinear characteristics.
The formulation for \(X_1\) consists of prices and fixed effects constructed from product_ids, which we will absorb using the absorb argument of Formulation.
If we were interested in reporting estimates for each fixed effect, we could replace the formulation for \(X_1\) with Formulation('prices+C(product_ids)').
Because sugar, mushy, and the constant are collinear with product_ids, we can include them in \(X_2\) but not in \(X_1\).
We also need to specify an Integration configuration. We consider two alternatives:
Monte Carlo draws: we simulate 50 individuals from a random normal distribution. This is just for simplicity. In practice quasi-Monte Carlo sequences such as Halton draws are preferable, and there should generally be many more simulated individuals than just 50.
Product rules: we construct nodes and weights according to a product rule that exactly integrates polynomials of degree \(5 \times 2 - 1 = 9\) or less.
Dimensions:
=============================================
T N F I K1 K2 MD ED
--- ---- --- ----- ---- ---- ---- ----
94 2256 5 58750 1 4 20 1
=============================================
Formulations:
===========================================================
Column Indices: 0 1 2 3
----------------------------- ------ ------ ----- -----
X1: Linear Characteristics prices
X2: Nonlinear Characteristics 1 prices sugar mushy
===========================================================
As an illustration of how to configure the optimization routine, we’ll use a simpler, non-default Optimization configuration that doesn’t support parameter bounds, and use a relatively loose tolerance so the problems are solved quickly. In practice along with more integration draws, it’s a good idea to use a tighter termination tolerance.
[8]:
bfgs = pyblp.Optimization('bfgs', {'gtol': 1e-4})
bfgs
[8]:
Configured to optimize using the BFGS algorithm implemented in SciPy with analytic gradients and options {gtol: +1.0E-04}.
We estimate three versions of the model:
An unrestricted covariance matrix for random tastes using Monte Carlo integration.
An unrestricted covariance matrix for random tastes using the product rule.
A restricted diagonal matrix for random tastes using Monte Carlo Integration.
Notice that the only thing that changes when we estimate the restricted covariance is our initial guess of \(\Sigma_0\). The upper diagonal in this initial guess is ignored because we are optimizing over the lower-triangular Cholesky root of \(\Sigma\).
We see that all three models give similar estimates of the price coefficient \(\hat{\alpha} \approx -30\). Note a few of the estimated terms on the diagonal of \(\Sigma\) are negative. Since the diagonal consists of standard deviations, negative values are unrealistic. When using another optimization routine that supports bounds (like the default L-BFGS-B routine), these diagonal elements are by default bounded from below by zero.
To add demographic data we need to make two changes:
We need to load agent_data, which for this cereal problem contains pre-computed Monte Carlo draws and demographics.
We need to add an agent_formulation to the model.
The agent_data has several reserved column names.
market_ids are the index that links the agent_data to the market_ids in product_data.
weights are the weights \(w_{it}\) attached to each agent. In each market, these should sum to one so that \(\sum_i w_{it} = 1\). It is often the case that \(w_{it} = 1 / I_t\) where \(I_t\) is the number of agents in market \(t\), so that each agent gets equal weight. Other possibilities include quadrature nodes and weights.
nodes0, nodes1, and so on are the nodes at which the unobserved agent tastes \(\mu_{ijt}\) are evaluated. The nodes should be labeled from \(0, \ldots, K_2 - 1\) where \(K_2\) is the number of random coefficients.
Other fields are the realizations of the demographics \(d\) themselves.
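A toy construction of such agent data, with hypothetical market IDs, equal weights \(w_{it} = 1/I_t\), and standard normal nodes, might look like:

```python
import random

def build_agent_data(market_ids, agents_per_market, K2, seed=0):
    """Build agent rows with equal weights 1 / I_t and one standard
    normal node column per random coefficient (nodes0, ..., nodes{K2-1})."""
    rng = random.Random(seed)
    rows = []
    for t in market_ids:
        for _ in range(agents_per_market):
            row = {"market_ids": t, "weights": 1.0 / agents_per_market}
            for k in range(K2):
                row["nodes%d" % k] = rng.gauss(0.0, 1.0)
            rows.append(row)
    return rows

# hypothetical market IDs; the real data's IDs and draws differ
agent_data = build_agent_data(["C01Q1", "C01Q2"], agents_per_market=20, K2=4)
print(len(agent_data), sum(r["weights"] for r in agent_data if r["market_ids"] == "C01Q1"))
```

In practice, such rows would be loaded into a pandas DataFrame and passed to Problem as agent_data.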
The agent_formulation tells us which columns of demographic information to interact with \(X_2\).
[13]:
agent_formulation = pyblp.Formulation('0 + income + income_squared + age + child')
agent_formulation
[13]:
income + income_squared + age + child
This tells us to include demographic realizations for income, income_squared, age, and the presence of children, child, but to ignore other possible demographics when interacting demographics with \(X_2\). We should also generally exclude the constant from the demographic formula.
Now we configure the Problem to include the agent_formulation and agent_data, which follow the product_formulations and product_data.
When we display the class, it lists the demographic interactions in the table of formulations and reports \(D = 4\), the dimension of the demographic draws.
Dimensions:
=================================================
T N F I K1 K2 D MD ED
--- ---- --- ---- ---- ---- --- ---- ----
94 2256 5 1880 1 4 4 20 1
=================================================
Formulations:
===================================================================
Column Indices: 0 1 2 3
----------------------------- ------ -------------- ----- -----
X1: Linear Characteristics prices
X2: Nonlinear Characteristics 1 prices sugar mushy
d: Demographics income income_squared age child
===================================================================
The initialized problem can be solved with Problem.solve. We’ll use the same starting values as Nevo (2000). By passing a diagonal matrix as starting values for \(\Sigma\), we’re choosing to ignore covariance terms. Similarly, zeros in the starting values for \(\Pi\) mean that those parameters will be fixed at zero.
To replicate common estimates, we’ll use the non-default BFGS optimization routine (with a slightly tighter tolerance to avoid getting stuck at a spurious local minimum), and we’ll configure Problem.solve to use 1-step GMM instead of 2-step GMM.
Results are similar to those in the original paper with a (scaled) objective value of \(q(\hat{\theta}) = 4.65\) and a price coefficient of \(\hat{\alpha} = -62.7\).
Because the interactions between price, income, and income_squared are potentially collinear, we might worry that \(\hat{\Pi}_{21} = 588\) and \(\hat{\Pi}_{22} = -30.2\) are pushing the price coefficient in opposite directions. Both are large in magnitude but statistically insignificant. One way of dealing with this is to restrict \(\Pi_{22} = 0\).
There are two ways we can do this:
Change the initial \(\Pi_0\) values to make this term zero, since parameters that start at zero are fixed at zero during estimation.
Re-create the problem without the income_squared demographic, dropping both the variable and the corresponding column in \(\Pi_0\).
Taking the second approach, we’ll now drop both income_squared and the corresponding column in \(\Pi_0\).
[17]:
restricted_formulation = pyblp.Formulation('0 + income + age + child')
deleted_pi = np.delete(initial_pi, 1, axis=1)
restricted_problem = pyblp.Problem(product_formulations, product_data, restricted_formulation, agent_data)
restricted_problem.solve(initial_sigma, deleted_pi, optimization=tighter_bfgs, method='1s')
The parameter estimates and standard errors are identical for both approaches. Based on the number of fixed point iterations, there is some evidence that the solver took a slightly different path for each problem, but both restricted problems arrived at identical answers. The \(\hat{\Pi}_{12}\) interaction term is still insignificant.
The product_data contains market IDs, product IDs, firm IDs, shares, prices, a number of product characteristics, and instruments. The product IDs are called clustering_ids because they will be used to compute clustered standard errors. For more information about the instruments and the example data as a whole, refer to the data module.
The agent_data contains market IDs, integration weights \(w_{it}\), integration nodes \(\nu_{it}\), and demographics \(d_{it}\). Here we use the \(I_t = 200\) importance sampling weights and nodes from the original paper.
In non-example problems, it is usually a better idea to use many more draws, or a more sophisticated Integration configuration such as sparse grid quadrature.
Unlike the fake cereal problem, we won’t absorb any fixed effects in the automobile problem, so the linear part of demand \(X_1\) has more components. We also need to specify a formula for the random coefficients \(X_2\), including a random coefficient on the constant, which captures correlation among all inside goods.
The primary new addition to the model relative to the fake cereal problem is that we add a supply side formula for product characteristics that contribute to marginal costs, \(X_3\). The patsy-style formulas support functions of regressors such as the log function used below.
We stack the three product formulations in order: \(X_1\), \(X_2\), and \(X_3\).
[4]:
product_formulations = (
    pyblp.Formulation('1 + hpwt + air + mpd + space'),
    pyblp.Formulation('1 + prices + hpwt + air + mpd + space'),
    pyblp.Formulation('1 + log(hpwt) + air + log(mpg) + log(space) + trend')
)
product_formulations
The original specification for the automobile problem includes the term \(\log(y_i - p_j)\), in which \(y\) is income and \(p\) are prices. Instead of including this term, which gives rise to a host of numerical problems, we’ll follow Berry, Levinsohn, and Pakes (1999) and use its first-order linear approximation, \(p_j / y_i\).
The agent formulation for demographics, \(d\), includes a column of \(1 / y_i\) values, which we’ll interact with \(p_j\). To do this, we will treat draws of \(y_i\) as demographic variables.
As in the cereal example, the Problem can be constructed by combining the product_formulations, product_data, agent_formulation, and agent_data. We’ll also choose the functional form of marginal costs \(c_{jt}\). A linear marginal cost specification is the default setting, so we’ll need to use the costs_type argument of Problem to employ the log-linear specification used by Berry, Levinsohn, and Pakes (1995).
Dimensions:
=======================================================
T N F I K1 K2 K3 D MD MS
--- ---- --- ---- ---- ---- ---- --- ---- ----
20 2217 26 4000 5 6 6 1 13 18
=======================================================
Formulations:
=====================================================================================
Column Indices: 0 1 2 3 4 5
----------------------------- -------- --------- ---- -------- ---------- -----
X1: Linear Characteristics 1 hpwt air mpd space
X2: Nonlinear Characteristics 1 prices hpwt air mpd space
X3: Log Cost Characteristics 1 log(hpwt) air log(mpg) log(space) trend
d: Demographics 1/income
=====================================================================================
The problem outputs a table of dimensions:
\(T\) denotes the number of markets.
\(N\) is the length of the dataset (the number of products across all markets).
\(F\) denotes the number of firms.
\(I = \sum_t I_t\) is the total number of agents across all markets (200 draws per market times 20 markets).
\(K_1\) is the number of linear demand characteristics.
\(K_2\) is the number of nonlinear demand characteristics.
\(K_3\) is the number of linear supply characteristics.
\(D\) is the number of demographic variables.
\(M_D\) is the number of demand instruments, including exogenous regressors.
\(M_S\) is the number of supply instruments, including exogenous regressors.
The formulations table describes all four formulas for demand-side linear characteristics, demand-side nonlinear characteristics, supply-side characteristics, and demographics.
Choosing \(\Sigma\) and \(\Pi\) starting values, \(\Sigma_0\) and \(\Pi_0\).
Potentially choosing bounds for \(\Sigma\) and \(\Pi\).
The decisions we will use are:
Use published estimates as our starting values in \(\Sigma_0\).
Interact the inverse of income, \(1 / y_i\), only with prices, and use the published estimate on \(\log(y_i - p_j)\) as our starting value for \(\alpha\) in \(\Pi_0\).
Bound \(\Sigma_0\) to be positive since it is a diagonal matrix where the diagonal consists of standard deviations.
When using a routine that supports bounds, it’s usually a good idea to set your own more conservative bounds so that the routine doesn’t try out large parameter values that create numerical issues.
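For instance, such bounds might be built as follows. This is a sketch with illustrative numbers, not the published estimates; the idea is to keep each nonzero diagonal element of \(\Sigma_0\) nonnegative and capped, while elements that start at zero stay fixed:

```python
import numpy as np

# Illustrative starting values for a 5 x 5 diagonal Sigma (made-up numbers).
initial_sigma = np.diag([3.6, 4.6, 1.8, 1.1, 2.1])

# Keep standard deviations nonnegative and cap them well below values that
# would create numerical issues; elements that start at zero stay fixed there.
sigma_lb = np.zeros_like(initial_sigma)
sigma_ub = np.where(initial_sigma != 0, 100.0, 0.0)
```

Arrays like these could then be passed to Problem.solve as a `(lower, upper)` tuple via its bounds arguments.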
Note that there are only 5 nonzeros on the diagonal of \(\Sigma\), which means that we only need 5 columns of integration nodes to integrate over these 5 dimensions of unobserved heterogeneity. Indeed, agent_data contains exactly 5 columns of nodes. If we were to ignore the \(\log(y_i - p_j)\) term (by not configuring \(\Pi\)) and include a term on prices in \(\Sigma\) instead, we would have needed 6 columns of integration nodes in our agent_data.
A downside of the log-linear marginal costs specification is that nonpositive estimated marginal costs can create problems for the optimization routine when computing \(\log c(\hat{\theta})\). We’ll use the costs_bounds argument to bound marginal costs from below by a small number.
Finally, as in the original paper, we’ll use W_type and se_type to cluster by product IDs, which were specified as clustering_ids in product_data, and set initial_update=True to update the initial GMM weighting matrix and the mean utility at the starting parameter values.
We’ll first re-solve the fake cereal problem from Nevo (2000), loading the fake data and estimating the model as in the fake cereal tutorial. We output the setup of the model to confirm that we have correctly configured the Problem.
[2]:
product_data = pd.read_csv(pyblp.data.NEVO_PRODUCTS_LOCATION)
agent_data = pd.read_csv(pyblp.data.NEVO_AGENTS_LOCATION)
product_formulations = (
    pyblp.Formulation('0 + prices', absorb='C(product_ids)'),
    pyblp.Formulation('1 + prices + sugar + mushy')
)
agent_formulation = pyblp.Formulation('0 + income + income_squared + age + child')
problem = pyblp.Problem(product_formulations, product_data, agent_formulation, agent_data)
problem
[2]:
Dimensions:
=================================================
T N F I K1 K2 D MD ED
--- ---- --- ---- ---- ---- --- ---- ----
94 2256 5 1880 1 4 4 20 1
=================================================
Formulations:
===================================================================
Column Indices: 0 1 2 3
----------------------------- ------ -------------- ----- -----
X1: Linear Characteristics prices
X2: Nonlinear Characteristics 1 prices sugar mushy
d: Demographics income income_squared age child
===================================================================
We’ll solve the problem in the same way as before. The Problem.solve method returns a ProblemResults class, which displays basic estimation results. The results that are displayed are simply formatted information extracted from various class attributes such as ProblemResults.sigma and ProblemResults.sigma_se.
Post-estimation outputs are computed for each market and stacked. We’ll use matplotlib functions to display the matrices associated with a single market.
The diagonal of the first image consists of own elasticities and the diagonal of the second image consists of diversion ratios to the outside good. As one might expect, own price elasticities are large and negative while cross-price elasticities are positive but much smaller.
An alternative to summarizing full elasticity matrices is to use ProblemResults.compute_aggregate_elasticities to estimate aggregate elasticities of demand, \(E\), in each market, which reflect the change in total sales under a proportional sales tax of some factor.
Since demand for an entire product category is generally less elastic than the average elasticity of individual products, mean own elasticities are generally larger in magnitude than aggregate elasticities.
[9]:
plt.hist([means.flatten(), aggregates.flatten()], color=['red', 'blue'], bins=50);
plt.legend(['Mean Own Elasticities', 'Aggregate Elasticities']);
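As a toy illustration of what an aggregate elasticity measures, it is the percent change in the total inside share induced by a small proportional increase in all prices. The shares below are invented for the sake of the arithmetic:

```python
import numpy as np

# Hypothetical inside-good shares before and after a 1% across-the-board
# price increase in one market.
factor = 0.01
shares_before = np.array([0.10, 0.15, 0.05])
shares_after = np.array([0.098, 0.146, 0.049])

# Percent change in total inside share per percent change in prices.
change = shares_after.sum() - shares_before.sum()
aggregate_elasticity = change / shares_before.sum() / factor
```

Here total sales fall by about 2.3% for a 1% price increase, a much smaller magnitude than typical own elasticities of individual products.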
To compute marginal costs, \(c\), the product_data passed to Problem must have had a firm_ids field. Since we included firm IDs when configuring the problem, we can use ProblemResults.compute_costs.
Other methods that compute supply-side outputs often compute marginal costs themselves. For example, ProblemResults.compute_markups will compute marginal costs when estimating markups, \(\mathscr{M}\), but computation can be sped up if we just use our pre-computed values.
Before computing post-merger outputs, we’ll supplement our pre-merger markups with some other outputs. We’ll compute Herfindahl-Hirschman Indices, \(\text{HHI}\), with ProblemResults.compute_hhi; population-normalized gross expected profits, \(\pi\), with ProblemResults.compute_profits; and population-normalized consumer surpluses, \(\text{CS}\), with ProblemResults.compute_consumer_surpluses.
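For reference, a Herfindahl-Hirschman Index is just the sum of squared firm market shares, conventionally scaled by 10,000. A minimal sketch with hypothetical shares:

```python
import numpy as np

# Hypothetical firm-level shares in one market. An HHI near 10,000 indicates
# a monopoly; values near zero indicate a fragmented market.
firm_shares = np.array([0.40, 0.30, 0.20, 0.10])
hhi = 10_000 * np.sum(firm_shares ** 2)  # approximately 3000
```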
We can use ProblemResults.compute_approximate_prices or ProblemResults.compute_prices to estimate post-merger prices. The first method, which is discussed, for example, in Nevo (1997), assumes that shares and their price derivatives are unaffected by the merger. The second method does not make these assumptions and iterates over the \(\zeta\)-markup equation from Morrow and Skerlos (2011) to solve the full system of \(J_t\) equations and \(J_t\) unknowns in each market \(t\). We’ll use the latter, since it is fast enough for this example problem.
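The general shape of this computation can be sketched with a toy fixed point. The stand-in zeta function below is not the actual markup function, just a simple contraction with a known solution:

```python
import numpy as np

def solve_prices(costs, zeta, tol=1e-12, max_iterations=1000):
    """Iterate p <- c + zeta(p) until successive prices stop changing."""
    prices = costs.copy()
    for _ in range(max_iterations):
        updated = costs + zeta(prices)
        if np.max(np.abs(updated - prices)) < tol:
            return updated
        prices = updated
    return prices

# A toy markup equal to half of price solves p = c + p / 2, i.e., p = 2c.
solution = solve_prices(np.array([1.0, 2.0]), lambda p: 0.5 * p)
```

In the real computation, zeta depends on demand derivatives and firm ownership, but the iteration scheme has the same structure.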
Post-estimation outputs can be informative, but they don’t mean much without a sense of sample-to-sample variability. One way to estimate confidence intervals for post-estimation outputs is with a standard bootstrap procedure:
Construct a large number of bootstrap samples by sampling with replacement from the original product data.
Initialize and solve a Problem for each bootstrap sample.
Compute the desired post-estimation output for each bootstrapped ProblemResults and, from the resulting empirical distribution, construct bootstrap confidence intervals.
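The resampling in the first step might look like the following sketch, which resamples entire markets with replacement so that within-market structure is preserved (the IDs are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
market_ids = np.array(['C01T01', 'C01T02', 'C02T01', 'C02T02'])

# Draw a bootstrap sample of markets: some markets repeat, others drop out.
# Each sampled market contributes all of its products to the bootstrap dataset.
bootstrap_markets = rng.choice(market_ids, size=market_ids.size, replace=True)
```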
Although appealing because of its simplicity, the computational resources required for this procedure are often prohibitively expensive. Furthermore, human oversight of the optimization routine is often required to determine whether the routine ran into any problems and if it successfully converged. Human oversight of estimation for each bootstrapped problem is usually not feasible.
A more reasonable alternative is a parametric bootstrap procedure:
Construct a large number of draws from the estimated joint distribution of parameters.
Compute the implied mean utility, \(\delta\), and shares, \(s\), for each draw. If a supply side was estimated, also compute the implied marginal costs, \(c\), and prices, \(p\).
Compute the desired post-estimation output under each of these parametric bootstrap samples. Again, from the resulting empirical distribution, construct bootstrap confidence intervals.
Compared to the standard bootstrap procedure, the parametric bootstrap requires far fewer computational resources, and is simple enough to not require human oversight of each bootstrap iteration. The primary complication to this procedure is that when supply is estimated, equilibrium prices and shares need to be computed for each parametric bootstrap sample by iterating over the \(\zeta\)-markup equation from Morrow and Skerlos (2011).
Although nontrivial, this fixed point iteration problem is much less demanding than the full optimization routine required to solve the BLP problem from the start.
An empirical distribution of results computed according to this parametric bootstrap procedure can be created with the ProblemResults.bootstrap method, which returns a BootstrappedResults class that can be used just like ProblemResults to compute various post-estimation outputs.
The difference is that BootstrappedResults methods return arrays with an extra first dimension, along which bootstrapped results are stacked.
We’ll construct 90% parametric bootstrap confidence intervals for estimated mean own elasticities in each market of the fake cereal problem. Usually, bootstrapped confidence intervals should be based on thousands of draws, but we’ll only use a few for the sake of speed in this example.
Bootstrapped Results Summary:
======================
Computation Bootstrap
Time Draws
----------- ---------
00:00:38 100
======================
[21]:
bounds = np.percentile(
    bootstrapped_results.extract_diagonal_means(bootstrapped_results.compute_elasticities()),
    q=[5, 95],
    axis=0
)
table = pd.DataFrame(index=problem.unique_market_ids, data={
    'Lower Bound': bounds[0].flatten(),
    'Mean Own Elasticity': means.flatten(),
    'Upper Bound': bounds[1].flatten()
})
table.round(2).head()
Given a consistent estimate of \(\theta\), we may want to compute the optimal instruments of Chamberlain (1987) and use them to re-solve the problem. Optimal instruments have been shown, for example, by Reynaert and Verboven (2014), to reduce bias, improve efficiency, and enhance stability of BLP estimates.
The ProblemResults.compute_optimal_instruments method computes the expected Jacobians that comprise the optimal instruments by integrating over the density of \(\xi\) (and \(\omega\) if a supply side was estimated). By default, the method approximates this integral by averaging over the Jacobian realizations computed under draws from the asymptotic normal distribution of the error terms. Since this process is computationally expensive and often doesn’t make much of a difference, we’ll use method='approximate' in this example to simply evaluate the Jacobians at the expected value of \(\xi\), zero.
Optimal Instrument Results Summary:
=======================
Computation Error Term
Time Draws
----------- ----------
00:00:01 1
=======================
We can use the OptimalInstrumentResults.to_problem method to re-create the fake cereal problem with the estimated optimal excluded instruments.
Before configuring and solving a problem with real data, it may be a good idea to perform Monte Carlo analysis on simulated data to verify that it is possible to accurately estimate model parameters. For example, before configuring and solving the example problems in the prior tutorials, it may have been a good idea to simulate data according to the assumed models of supply and demand. During such Monte Carlo analysis, the real data would only be used to determine sample sizes and perhaps to choose reasonable true parameters.
Simulations are configured with the Simulation class, which requires many of the same inputs as Problem. The two main differences are:
Variables in formulations that cannot be loaded from product_data or agent_data will be drawn from independent uniform distributions.
True parameters and the distribution of unobserved product characteristics are specified.
First, we’ll use build_id_data to build market and firm IDs for a model in which there are \(T = 50\) markets, and in each market \(t\), a total of \(J_t = 20\) products produced by \(F = 10\) firms.
[2]:
id_data = pyblp.build_id_data(T=50, J=20, F=10)
Next, we’ll create an Integration configuration to build agent data according to a Gauss-Hermite product rule that exactly integrates polynomials of degree \(2 \times 9 - 1 = 17\) or less.
Configured to construct nodes and weights according to the level-9 Gauss-Hermite product rule with options {}.
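To see what exactness for degree \(2 \times 9 - 1 = 17\) means, the level-9 rule recovers low-order moments of the standard normal exactly once the raw Gauss-Hermite nodes and weights are rescaled. A sketch using numpy's hermgauss (which you do not need to call yourself when using the Integration configuration):

```python
import numpy as np

# Raw Gauss-Hermite nodes and weights integrate against exp(-x^2); rescale
# them to compute expectations under the standard normal distribution instead.
nodes, weights = np.polynomial.hermite.hermgauss(9)
normal_nodes = np.sqrt(2) * nodes
normal_weights = weights / np.sqrt(np.pi)

second_moment = normal_weights @ normal_nodes ** 2   # E[nu^2] = 1, exactly
fourth_moment = normal_weights @ normal_nodes ** 4   # E[nu^4] = 3, exactly
```

With only 9 nodes per dimension, these moments are exact up to floating point error, which is why quadrature rules can be far more efficient than Monte Carlo draws for smooth integrands.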
We’ll then pass these data to Simulation. We’ll use Formulation configurations to create an \(X_1\) that consists of a constant, prices, and an exogenous characteristic; an \(X_2\) that consists only of the same exogenous characteristic; and an \(X_3\) that consists of the common exogenous characteristic and a cost-shifter.
The Simulation can be further configured with other arguments that determine how unobserved product characteristics are simulated and how marginal costs are specified.
At this stage, simulated variables are not consistent with true parameters, so we still need to solve the simulation with Simulation.replace_endogenous. This method replaces simulated prices and market shares with values that are consistent with the true parameters. Just like ProblemResults.compute_prices, it does so by iterating over the \(\zeta\)-markup equation from Morrow and Skerlos (2011).
Simulation Results Summary:
=====================================
Computation Fixed Point Contraction
Time Iterations Evaluations
----------- ----------- -----------
00:00:01 721 721
=====================================
Now, we can try to recover the true parameters by creating and solving a Problem.
The convenience method SimulationResults.to_problem constructs some basic “sums of characteristics” BLP instruments that are functions of all exogenous numerical variables in the problem. In this example, excluded demand-side instruments are the cost-shifter z and traditional BLP instruments constructed from x. Excluded supply-side instruments are traditional BLP instruments constructed from x and z.
[6]:
problem = simulation_results.to_problem()
problem
[6]:
Dimensions:
=================================================
T N F I K1 K2 K3 MD MS
--- ---- --- --- ---- ---- ---- ---- ----
50 1000 10 450 3 1 2 5 6
=================================================
Formulations:
=================================================
Column Indices: 0 1 2
------------------------------- --- ------ ---
X1: Linear Characteristics 1 prices x
X2: Nonlinear Characteristics x
X3: Linear Cost Characteristics x z
=================================================
We’ll choose starting values that are half the true parameters so that the optimization routine has to do some work. Note that since we’re jointly estimating the supply side, we need to provide an initial value for the linear coefficient on prices because this parameter cannot be concentrated out of the problem (unlike linear coefficients on exogenous characteristics).
In addition to checking that the configuration for a model based on actual data makes sense, the Simulation class can also be a helpful tool for better understanding under what general conditions BLP models can be accurately estimated. Simulations are also used extensively in pyblp’s test suite.
The majority of the package consists of classes, which compartmentalize different aspects of the BLP model. There are some convenience functions as well.
Various components of the package require configurations for how to approximate integrals, solve fixed point problems, and solve optimization problems. Such configurations are specified with the following classes.
Test the validity of model restrictions with the Wald test.
In addition to class attributes, other post-estimation outputs can be estimated market-by-market with the following methods, which each return an array.
Approximate equilibrium prices after firm or cost changes, \(p^*\), under the assumption that shares and their price derivatives are unaffected by such changes.
A parametric bootstrap can be used, for example, to compute standard errors for the above post-estimation outputs. The following method returns a results class with all of the above methods, which returns a distribution of post-estimation outputs corresponding to different bootstrapped samples.
The following class allows for evaluation of more complicated counterfactuals than is possible with ProblemResults methods, or for simulation of synthetic data from scratch.
Replace simulated prices and market shares with equilibrium values that are consistent with true parameters.
A less common way to solve the simulation is to assume simulated prices and shares represent an equilibrium and to replace exogenous variables instead.
Product and agent data that are passed or constructed by Problem and Simulation are structured internally into classes with field names that more closely resemble BLP notation. Although these structured data classes are not directly constructable, they can be accessed with Problem and Simulation class attributes. It can be helpful to compare these structured data classes with the data or configurations used to create them.
In addition to classes and functions, there are also two modules that can be used to configure global package options and locate example data that comes with the package.
Failed to invert a Jacobian of shares with respect to \(\xi\) when computing the Jacobian of \(\xi\) (equivalently, of \(\delta\)) with respect to \(\theta\).
This page contains a full list of references cited in the documentation, including the original work of Berry, Levinsohn, and Pakes (1995). If you use PyBLP in your research, we ask that you also cite Conlon and Gortmaker (2020), which describes the advances implemented in the package.
Copyright 2020 Jeff Gortmaker and Christopher T. Conlon
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Testing is done with the tox automation tool, which runs a pytest-backed test suite in the tests/ directory. This FAQ contains some useful information about how to use tox on Windows.
In addition to the installation requirements for the package itself, running tests and building documentation requires additional packages specified by the tests and docs extras in setup.py, along with any other explicitly specified deps in tox.ini.
The full suite of tests also requires installation of the following software:
Artleys Knitro version 10.3 or newer: testing optimization routines.
MATLAB: comparing sparse grids with those created by the function nwspgr created by Florian Heiss and Viktor Winschel, which must be included in a directory on the MATLAB path.
If software is not installed, its associated tests will be skipped. Additionally, some tests that require support for extended precision will be skipped if on the platform running the tests, numpy.longdouble has the same precision as numpy.float64. This tends to be the case on Windows.
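A quick way to check whether your platform is affected is to compare machine epsilons (a sketch; the test suite performs its own check):

```python
import numpy as np

# On many Windows builds, numpy.longdouble is an alias for float64, in which
# case tests requiring extended precision are skipped.
has_extended_precision = np.finfo(np.longdouble).eps < np.finfo(np.float64).eps
print(has_extended_precision)
```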
Defined in tox.ini are environments that test the package under different python versions, check types, enforce style guidelines, verify the integrity of the documentation, and release the package. First, tox should be installed on top of an Anaconda installation. The following command can be run in the top-level pyblp directory to run all testing environments:
tox
You can choose to run only one environment, such as the one that builds the documentation, with the -e flag:
Fixtures, which are defined in tests.conftest, configure the testing environment and simulate problems according to a range of specifications.
Most BLP-specific tests in tests.test_blp verify properties about results obtained by solving the simulated problems under various parameterizations. Examples include:
Reasonable formulations of problems should give rise to estimated parameters that are close to their true values.
Cosmetic changes such as the number of processes should not change estimates.
Post-estimation outputs should satisfy certain properties.
Optimization routines should behave as expected.
Derivatives computed with finite differences should approach analytic derivatives.
Tests of generic utilities in tests.test_formulation, tests.test_integration, tests.test_iteration, and tests.test_optimization verify that matrix formulation, integral approximation, fixed point iteration, and nonlinear optimization all work as expected. Examples include:
Nonlinear formulas give rise to expected matrices and derivatives.
Gauss-Hermite integrals are better approximated with quadrature based on Gauss-Hermite rules than with Monte Carlo integration.
To solve a fixed point iteration problem for which it was developed, SQUAREM requires fewer fixed point evaluations than does simple iteration.
All optimization routines manage to solve a well-known optimization problem under different parameterizations.
Random Coefficients Logit Tutorial with the Automobile Data
[1]:
import pyblp
import numpy as np
import pandas as pd

pyblp.options.digits = 2
pyblp.options.verbose = False
pyblp.__version__
[1]:
'0.10.0'
In this tutorial, we’ll use data from Berry, Levinsohn, and Pakes (1995) to solve the paper’s automobile problem.
Application of Random Coefficients Logit with the Automobile Data
This tutorial is similar to the fake cereal tutorial, but exhibits some other features of pyblp:
Incorporating a supply side into demand estimation.
Allowing for simple price-income demographic effects.
Calculating clustered standard errors.
Loading the Data
We’ll use pandas to load two sets of data:
product_data, which contains prices, shares, and other product characteristics.
agent_data, which contains draws from the distribution of heterogeneity.
[1]:
agent_data = pd.read_csv(pyblp.data.BLP_AGENTS_LOCATION)
agent_data.head()
[1]:
   market_ids   weights    nodes0    nodes1    nodes2    nodes3    nodes4      income
0        1971  0.000543  1.192188  0.478777  0.980830 -0.824410  2.473301  109.560369
1        1971  0.000723  1.497074 -2.026204 -1.741316  1.412568 -0.747468   45.457314
2        1971  0.000544  1.438081  0.813280 -1.749974 -1.203509  0.049558  127.146548
3        1971  0.000701  1.768655 -0.177453  0.286602  0.391517  0.683669   22.604045
4        1971  0.000549  0.849970 -0.135337  0.735920  1.036247 -1.143436  170.226032
Setting up the Problem
Unlike the fake cereal problem, we won’t absorb any fixed effects in the automobile problem, so the linear part of demand \(X_1\) has more components. We also need to specify a formula for the random coefficients \(X_2\), including a random coefficient on the constant, which captures correlation among all inside goods.
The primary new addition to the model relative to the fake cereal problem is that we add a supply side formula for product characteristics that contribute to marginal costs, \(X_3\). The patsy-style formulas support functions of regressors such as the log function used below.
We stack the three product formulations in order: \(X_1\), \(X_2\), and \(X_3\).
[1]:
product_formulations = (
    pyblp.Formulation('1 + hpwt + air + mpd + space'),
    pyblp.Formulation('1 + prices + hpwt + air + mpd + space'),
    pyblp.Formulation('1 + log(hpwt) + air + log(mpg) + log(space) + trend')
)
product_formulations
[1]:
(1 + hpwt + air + mpd + space,
 1 + prices + hpwt + air + mpd + space,
 1 + log(hpwt) + air + log(mpg) + log(space) + trend)
The original specification for the automobile problem includes the term \(\log(y_i - p_j)\), in which \(y\) is income and \(p\) are prices. Instead of including this term, which gives rise to a host of numerical problems, we’ll follow Berry, Levinsohn, and Pakes (1999) and use its first-order linear approximation, \(p_j / y_i\).
The agent formulation for demographics, \(d\), includes a column of \(1 / y_i\) values, which we’ll interact with \(p_j\). To do this, we will treat draws of \(y_i\) as demographic variables.
[1]:
agent_formulation = pyblp.Formulation('0 + I(1 / income)')
agent_formulation
[1]:
I(1 / income)
As in the cereal example, the Problem can be constructed by combining the product_formulations, product_data, agent_formulation, and agent_data. We’ll also choose the functional form of marginal costs \(c_{jt}\). A linear marginal cost specification is the default setting, so we’ll need to use the costs_type argument of Problem to employ the log-linear specification used by Berry, Levinsohn, and Pakes (1995).
[1]:
problem = pyblp.Problem(product_formulations, product_data, agent_formulation, agent_data, costs_type='log')
problem
[1]:
Dimensions:
=======================================================
 T    N    F    I    K1   K2   K3   D   MD   MS
--- ---- --- ---- ---- ---- ---- --- ---- ----
20  2217  26  4000   5    6    6   1   13   18
=======================================================

Formulations:
=====================================================================================
       Column Indices:           0        1        2      3         4        5
----------------------------- -------- --------- ---- -------- ---------- -----
  X1: Linear Characteristics     1       hpwt    air     mpd     space
X2: Nonlinear Characteristics    1      prices   hpwt    air      mpd     space
 X3: Log Cost Characteristics    1     log(hpwt) air  log(mpg) log(space) trend
       d: Demographics        1/income
=====================================================================================
The problem outputs a table of dimensions:
\(T\) denotes the number of markets.
\(N\) is the length of the dataset (the number of products across all markets).
\(F\) denotes the number of firms.
\(I = \sum_t I_t\) is the total number of agents across all markets (200 draws per market times 20 markets).
\(K_1\) is the number of linear demand characteristics.
\(K_2\) is the number of nonlinear demand characteristics.
\(K_3\) is the number of linear supply characteristics.
\(D\) is the number of demographic variables.
\(M_D\) is the number of demand instruments, including exogenous regressors.
\(M_S\) is the number of supply instruments, including exogenous regressors.
The formulations table describes all four formulas for demand-side linear characteristics, demand-side nonlinear characteristics, supply-side characteristics, and demographics.
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Solving the Problem"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The only remaining decisions are:\n",
"\n",
"- Choosing $\\Sigma$ and $\\Pi$ starting values, $\\Sigma_0$ and $\\Pi_0$.\n",
"- Potentially choosing bounds for $\\Sigma$ and $\\Pi$.\n",
"\n",
"The decisions we will use are:\n",
"\n",
"- Use published estimates as our starting values in $\\Sigma_0$.\n",
"- Interact the inverse of income, $1 / y_i$, only with prices, and use the published estimate on $\\log(y_i - p_j)$ as our starting value for $\\alpha$ in $\\Pi_0$.\n",
"- Bound $\\Sigma_0$ to be positive since it is a diagonal matrix where the diagonal consists of standard deviations.\n",
"\n",
"When using a routine that supports bounds, it's usually a good idea to set your own more bounds so that the routine doesn't try out large parameter values that create numerical issues."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"initial_sigma = np.diag([3.612, 0, 4.628, 1.818, 1.050, 2.056])\n",
"initial_pi = np.c_[[0, -43.501, 0, 0, 0, 0]]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that there are only 5 nonzeros on the diagonal of $\\Sigma$, which means that we only need 5 columns of integration nodes to integrate over these 5 dimensions of unobserved heterogeneity. Indeed, `agent_data` contains exactly 5 columns of nodes. If we were to ignore the $\\log(y_i - p_j)$ term (by not configuring $\\Pi$) and include a term on prices in $\\Sigma$ instead, we would have needed 6 columns of integration nodes in our `agent_data`.\n",
"\n",
"A downside of the log-linear marginal costs specification is that nonpositive estimated marginal costs can create problems for the optimization routine when computing $\\log c(\\hat{\\theta})$. We'll use the `costs_bounds` argument to bound marginal costs from below by a small number. \n",
"\n",
"Finally, as in the original paper, we'll use `W_type` and `se_type` to cluster by product IDs, which were specified as `clustering_ids` in `product_data`, and set `initial_update=True` to update the initial GMM weighting matrix and the mean utility at the starting parameter values."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Problem Results Summary:\n",
"=======================================================================================================================\n",
"GMM Objective Projected Reduced Hessian Reduced Hessian Clipped Clipped Weighting Matrix Covariance Matrix\n",
"Step Value Gradient Norm Min Eigenvalue Max Eigenvalue Shares Costs Condition Number Condition Number \n",
"---- --------- ------------- --------------- --------------- ------- ------- ---------------- -----------------\n",
" 2 +5.0E+02 +3.6E-08 +4.9E-01 +5.1E+02 0 0 +4.2E+09 +3.8E+08 \n",
"=======================================================================================================================\n",
"\n",
"Cumulative Statistics:\n",
"===========================================================================\n",
"Computation Optimizer Optimization Objective Fixed Point Contraction\n",
" Time Converged Iterations Evaluations Iterations Evaluations\n",
"----------- --------- ------------ ----------- ----------- -----------\n",
" 00:28:13 Yes 58 94 28678 87850 \n",
"===========================================================================\n",
"\n",
"Nonlinear Coefficient Estimates (Robust SEs Adjusted for 999 Clusters in Parentheses):\n",
"===================================================================================================\n",
"Sigma: 1 prices hpwt air mpd space | Pi: 1/income \n",
"------ ---------- -------- ---------- ---------- ---------- ---------- | ------ ----------\n",
" 1 +2.0E+00 | 1 +0.0E+00 \n",
" (+6.1E+00) | \n",
" | \n",
"prices +0.0E+00 +0.0E+00 | prices -4.5E+01 \n",
" | (+9.2E+00)\n",
" | \n",
" hpwt +0.0E+00 +0.0E+00 +6.1E+00 | hpwt +0.0E+00 \n",
" (+2.2E+00) | \n",
" | \n",
" air +0.0E+00 +0.0E+00 +0.0E+00 +4.0E+00 | air +0.0E+00 \n",
" (+2.1E+00) | \n",
" | \n",
" mpd +0.0E+00 +0.0E+00 +0.0E+00 +0.0E+00 +2.5E-01 | mpd +0.0E+00 \n",
" (+5.5E-01) | \n",
" | \n",
"space +0.0E+00 +0.0E+00 +0.0E+00 +0.0E+00 +0.0E+00 +1.9E+00 | space +0.0E+00 \n",
" (+1.1E+00) | \n",
"===================================================================================================\n",
"\n",
"Beta Estimates (Robust SEs Adjusted for 999 Clusters in Parentheses):\n",
"==========================================================\n",
" 1 hpwt air mpd space \n",
"---------- ---------- ---------- ---------- ----------\n",
" -7.3E+00 +3.5E+00 -1.0E+00 +4.2E-01 +4.2E+00 \n",
"(+2.8E+00) (+1.4E+00) (+2.1E+00) (+2.5E-01) (+6.6E-01)\n",
"==========================================================\n",
"\n",
"Gamma Estimates (Robust SEs Adjusted for 999 Clusters in Parentheses):\n",
"======================================================================\n",
" 1 log(hpwt) air log(mpg) log(space) trend \n",
"---------- ---------- ---------- ---------- ---------- ----------\n",
" +2.8E+00 +9.0E-01 +4.2E-01 -5.2E-01 -2.6E-01 +2.7E-02 \n",
"(+1.2E-01) (+7.2E-02) (+8.7E-02) (+7.3E-02) (+2.1E-01) (+3.1E-03)\n",
"======================================================================"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"results = problem.solve(\n",
" initial_sigma,\n",
" initial_pi,\n",
" costs_bounds=(0.001, None),\n",
" W_type='clustered',\n",
" se_type='clustered',\n",
" initial_update=True,\n",
")\n",
"results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There are some discrepancies between our results and the original paper, but results are similar."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
},
"pycharm": {
"stem_cell": {
"cell_type": "raw",
"metadata": {
"collapsed": false
},
"source": []
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
pyblp-stable/_downloads/4014b73f9f4695fb7b551b9b865e93f6/build_differentiation_instruments.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Building Differentiation Instruments Example"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'0.10.0'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pyblp\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"np.set_printoptions(precision=3)\n",
"pyblp.__version__"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, we'll load the automobile product data from [Berry, Levinsohn, and Pakes (1995)](https://pyblp.readthedocs.io/en/stable/references.html#berry-levinsohn-and-pakes-1995), build some very simple excluded demand-side instruments for the problem in the spirit of [Gandhi and Houde (2017)](https://pyblp.readthedocs.io/en/stable/references.html#gandhi-and-houde-2017), and demonstrate how to update the problem data to use these instrument instead of the default ones."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"
"
],
"text/plain": [
" market_ids city_ids quarter product_ids firm_ids brand_ids shares \\\n",
"0 C01Q1 1 1 F1B04 1 4 0.012417 \n",
"1 C01Q1 1 1 F1B06 1 6 0.007809 \n",
"2 C01Q1 1 1 F1B07 1 7 0.012995 \n",
"3 C01Q1 1 1 F1B09 1 9 0.005770 \n",
"4 C01Q1 1 1 F1B11 1 11 0.017934 \n",
"\n",
" prices sugar mushy ... demand_instruments10 demand_instruments11 \\\n",
"0 0.072088 2 1 ... 2.116358 -0.154708 \n",
"1 0.114178 18 1 ... -7.374091 -0.576412 \n",
"2 0.132391 4 1 ... 2.187872 -0.207346 \n",
"3 0.130344 3 0 ... 2.704576 0.040748 \n",
"4 0.154823 12 0 ... 1.261242 0.034836 \n",
"\n",
" demand_instruments12 demand_instruments13 demand_instruments14 \\\n",
"0 -0.005796 0.014538 0.126244 \n",
"1 0.012991 0.076143 0.029736 \n",
"2 0.003509 0.091781 0.163773 \n",
"3 -0.003724 0.094732 0.135274 \n",
"4 -0.000568 0.102451 0.130640 \n",
"\n",
" demand_instruments15 demand_instruments16 demand_instruments17 \\\n",
"0 0.067345 0.068423 0.034800 \n",
"1 0.087867 0.110501 0.087784 \n",
"2 0.111881 0.108226 0.086439 \n",
"3 0.088090 0.101767 0.101777 \n",
"4 0.084818 0.101075 0.125169 \n",
"\n",
" demand_instruments18 demand_instruments19 \n",
"0 0.126346 0.035484 \n",
"1 0.049872 0.072579 \n",
"2 0.122347 0.101842 \n",
"3 0.110741 0.104332 \n",
"4 0.133464 0.121111 \n",
"\n",
"[5 rows x 30 columns]"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"product_data = pd.read_csv(pyblp.data.NEVO_PRODUCTS_LOCATION)\n",
"product_data.head()"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[1. , 0.07208794, 0.01241721],\n",
" [1. , 0.11417849, 0.00780939],\n",
" [1. , 0.13239066, 0.01299451],\n",
" ...,\n",
" [1. , 0.13701741, 0.00222918],\n",
" [1. , 0.10017433, 0.01146267],\n",
" [1. , 0.12755747, 0.02620832]])"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"matrix = pyblp.build_matrix(formulation, product_data)\n",
"matrix"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For various reasons, we may want to absorb fixed effects into the matrix. This can be done with the `absorb` argument of [`Formulation`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Formulation.html#pyblp.Formulation). We'll now re-create the matrix, absorbing product-specific fixed effects. Note that the constant column is now ignored."
]
},
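With a single categorical fixed effect, absorption is numerically equivalent to subtracting within-group means from each column. A minimal pandas sketch with made-up numbers (a toy table, not the actual dataset) illustrates the demeaning that absorption performs:

```python
import numpy as np
import pandas as pd

# Toy data: two products observed several times (purely illustrative).
df = pd.DataFrame({
    'product_ids': ['A', 'A', 'B', 'B', 'B'],
    'prices': [1.0, 3.0, 2.0, 4.0, 6.0],
})

# Absorbing a single categorical fixed effect amounts to subtracting the
# within-group mean from each column, so each group's residuals sum to zero.
demeaned = df['prices'] - df.groupby('product_ids')['prices'].transform('mean')
```

With multiple overlapping fixed effects, simple demeaning is no longer sufficient, which is why PyBLP delegates to its companion package PyHDFE.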
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"prices + shares + Absorb[product_ids]"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"absorb_formulation = pyblp.Formulation('prices + shares', absorb='product_ids')\n",
"absorb_formulation"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[-0.01124832, -0.00052161],\n",
" [-0.00713476, -0.03144549],\n",
" [ 0.02367765, -0.01664996],\n",
" ...,\n",
" [ 0.03371995, -0.00779841],\n",
" [-0.00417404, -0.0117508 ],\n",
" [-0.01195648, 0.00666695]])"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"demeaned_matrix = pyblp.build_matrix(absorb_formulation, product_data)\n",
"demeaned_matrix"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
pyblp-stable/_downloads/9d1dbca71bf668298c252966819c904c/nevo.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Random Coefficients Logit Tutorial with the Fake Cereal Data"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'0.10.0'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pyblp\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"pyblp.options.digits = 2\n",
"pyblp.options.verbose = False\n",
"pyblp.__version__"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this tutorial, we'll use data from [Nevo (2000)](https://pyblp.readthedocs.io/en/stable/references.html#nevo-2000) to solve the paper's fake cereal problem. Locations of CSV files that contain the data are in the [`data`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.data.html#module-pyblp.data) module.\n",
"\n",
"## Theory of Random Coefficients Logit\n",
"\n",
"The random coefficients model extends the plain logit model by allowing for correlated tastes for different product characteristics.\n",
"In this model (indirect) utility is given by\n",
"\n",
"$$u_{ijt} = \\alpha_i p_{jt} + x_{jt} \\beta_i^\\text{ex} + \\xi_{jt} + \\epsilon_{ijt}$$\n",
"\n",
"The main addition is that $\\beta_i = (\\alpha_i, \\beta_i^\\text{ex})$ have individual specific subscripts $i$.\n",
"\n",
"Conditional on $\\beta_i$, the individual market share follow the same logit form as before. But now we must integrate over heterogeneous individuals to get the aggregate market share:\n",
"\n",
"$$s_{jt}(\\alpha, \\beta, \\theta) = \\int \\frac{\\exp(\\alpha_i p_{jt} + x_{jt} \\beta_i^\\text{ex} + \\xi_{jt})}{1 + \\sum_k \\exp(\\alpha_i p_{jt} + x_{kt} \\beta_i^\\text{ex} + \\xi_{kt})} f(\\alpha_i, \\beta_i \\mid \\theta).$$\n",
"\n",
"In general, this integral needs to be calculated numerically. This also means that we can't directly linearize the model. It is common to re-parametrize the model to separate the aspects of mean utility that all individuals agree on, $\\delta_{jt} = \\alpha p_{jt} + x_{jt} \\beta^\\text{ex} + \\xi_{jt}$, from the individual specific heterogeneity, $\\mu_{ijt}(\\theta)$. This gives us\n",
"\n",
"$$s_{jt}(\\delta_{jt}, \\theta) = \\int \\frac{\\exp(\\delta_{jt} + \\mu_{ijt})}{1 + \\sum_k \\exp(\\delta_{kt} + \\mu_{ikt})} f(\\mu_{it} | \\theta).$$\n",
"\n",
"Given a guess of $\\theta$ we can solve the system of nonlinear equations for the vector $\\delta$ which equates observed and predicted market share $s_{jt} = s_{jt}(\\delta, \\theta)$. Now we can perform a linear IV GMM regression of the form\n",
"\n",
"$$\\delta_{jt}(\\theta) = \\alpha p_{jt} + x_{jt} \\beta^\\text{ex} + \\xi_{jt}.$$\n",
"\n",
"The moments are constructed by interacting the predicted residuals $\\hat{\\xi}_{jt}(\\theta)$ with instruments $z_{jt}$ to form\n",
"\n",
"$$\\bar{g}(\\theta) =\\frac{1}{N} \\sum_{j,t} z_{jt}' \\hat{\\xi}_{jt}(\\theta).$$"
]
},
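The share inversion described above is typically computed with the BLP contraction mapping, $\delta \leftarrow \delta + \log s - \log s(\delta, \theta)$. The following is a minimal NumPy sketch for a single market with simulated heterogeneity — the names (`mu`, `weights`, `predicted_shares`) and numbers are illustrative, not pyblp's internal API:

```python
import numpy as np

def predicted_shares(delta, mu, weights):
    """Aggregate logit shares: average individual choice probabilities
    over simulated agents (columns of mu), weighted by `weights`."""
    utilities = delta[:, None] + mu            # J products x I agents
    exp_u = np.exp(utilities)
    probs = exp_u / (1 + exp_u.sum(axis=0))    # denominator includes the outside good
    return probs @ weights

def invert_shares(shares_obs, mu, weights, tol=1e-12, max_iterations=10_000):
    """Solve s(delta) = shares_obs with the contraction
    delta <- delta + log(shares_obs) - log(s(delta))."""
    delta = np.log(shares_obs) - np.log(1 - shares_obs.sum())  # plain logit start
    for _ in range(max_iterations):
        new_delta = delta + np.log(shares_obs) - np.log(predicted_shares(delta, mu, weights))
        if np.max(np.abs(new_delta - delta)) < tol:
            return new_delta
        delta = new_delta
    return delta

# Tiny example: 3 products, 50 simulated agents with random heterogeneity.
rng = np.random.default_rng(0)
mu = 0.5 * rng.standard_normal((3, 50))
weights = np.full(50, 1 / 50)
true_delta = np.array([1.0, 0.5, -0.2])
shares = predicted_shares(true_delta, mu, weights)
recovered = invert_shares(shares, mu, weights)
```

Since the shares here are generated from `true_delta`, the contraction recovers it. PyBLP performs this inversion market by market at every objective evaluation, which is why the output tables report cumulative fixed point iterations and contraction evaluations.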
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Application of Random Coefficients Logit with the Fake Cereal Data\n",
"\n",
"To include random coefficients we need to add a [`Formulation`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Formulation.html#pyblp.Formulation) for the demand-side nonlinear characteristics $X_2$.\n",
"\n",
"Just like in the logit case we have the same reserved field names in `product_data`:\n",
"\n",
"- `market_ids` are the unique market identifiers which we subscript $t$.\n",
"- `shares` specifies the market share which need to be between zero and one, and within a market ID, $\\sum_{j} s_{jt} < 1$.\n",
"- `prices` are prices $p_{jt}$. These have some special properties and are _always_ treated as endogenous.\n",
"- `demand_instruments0`, `demand_instruments1`, and so on are numbered demand instruments. These represent only the _excluded_ instruments. The exogenous regressors in $X_1$ (of which $X_2$ is typically a subset) will be automatically added to the set of instruments.\n",
"\n",
"We proceed with the following steps:\n",
"\n",
"1. Load the `product data` which at a minimum consists of `market_ids`, `shares`, `prices`, and at least a single column of demand instruments, `demand_instruments0`.\n",
"2. Define a [`Formulation`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Formulation.html#pyblp.Formulation) for the $X_1$ (linear) demand model.\n",
"\n",
" - This and all other formulas are similar to R and [patsy](https://patsy.readthedocs.io/en/stable/) formulas.\n",
" - It includes a constant by default. To exclude the constant, specify either a `0` or a `-1`.\n",
" - To efficiently include fixed effects, use the `absorb` option and specify which categorical variables you would like to absorb.\n",
" - Some model reduction may happen automatically. The constant will be excluded if you include fixed effects and some precautions are taken against collinearity. However, you will have to make sure that differently-named variables are not collinear.\n",
"\n",
"3. Define a [`Formulation`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Formulation.html#pyblp.Formulation) for the $X_2$ (nonlinear) demand model.\n",
"\n",
" - Include only the variables over which we want random coefficients.\n",
" - Do not absorb or include fixed effects.\n",
" - It will include a random coefficient on the constant (to capture inside good vs. outside good preference) unless you specify not to with a `0` or a `-1`.\n",
"\n",
"4. Define an [`Integration`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Integration.html#pyblp.Integration) configuration to solve the market share integral from several available options:\n",
"\n",
" - Monte Carlo integration (pseudo-random draws).\n",
" - Product rule quadrature.\n",
" - Sparse grid quadrature.\n",
"\n",
"3. Combine [`Formulation`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Formulation.html#pyblp.Formulation) classes, `product_data`, and the [`Integration`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Integration.html#pyblp.Integration) configuration to construct a [`Problem`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.html#pyblp.Problem).\n",
"4. Use the [`Problem.solve`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.solve.html#pyblp.Problem.solve) method to estimate paramters.\n",
"\n",
" - It is required to specify an initial guess of the nonlinear parameters. This serves two primary purposes: speeding up estimation and indicating to the solver through initial values of zero which parameters are restricted to be always zero."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Specification of Random Taste Parameters\n",
"\n",
"It is common to assume that $f(\\beta_i \\mid \\theta)$ follows a multivariate normal distribution, and to break it up into three parts:\n",
"\n",
"1. A mean $K_1 \\times 1$ taste which all individuals agree on, $\\beta$.\n",
"2. A $K_2 \\times K_2$ covariance matrix, $\\Sigma$ .\n",
"3. Any $K_2 \\times D$ interactions, $\\Pi$, with observed $D \\times 1$ demographic data, $d_i$.\n",
"\n",
"Together this gives us that\n",
"\n",
"$$\\beta_i \\sim N(\\beta + \\Pi d_i, \\Sigma).$$\n",
"\n",
"[`Problem.solve`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.solve.html#pyblp.Problem.solve) takes an initial guess $\\Sigma_0$ of $\\Sigma$. It guarantees that $\\hat{\\Sigma}$ (the estimated parameters) will have the same sparsity structure as $\\Sigma_0$. So any zero element of $\\Sigma$ is restricted to be zero in the solution $\\hat{\\Sigma}$. For example, a popular restriction is that $\\Sigma$ is diagonal, this can be achieved by passing a diagonal matrix as $\\Sigma_0$.\n",
"\n",
"As is common with multivariate normal distributions, $\\Sigma$ is not estimated directly. Rather, its matrix square (Cholesky) root $L$ is estimated where $LL' = \\Sigma$."
]
},
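Under this parametrization, simulated tastes can be drawn as $\beta_i = \beta + \Pi d_i + L \nu_i$ with $\nu_i \sim N(0, I)$ and $LL' = \Sigma$. A short NumPy sketch with made-up dimensions and values (purely illustrative, not pyblp code):

```python
import numpy as np

# Illustrative dimensions: K2 = 3 random coefficients, D = 2 demographics,
# I = 1000 simulated agents. All numbers are made up.
rng = np.random.default_rng(0)
beta = np.array([1.0, -2.0, 0.5])        # mean tastes all individuals share
Sigma = np.diag([0.5, 1.0, 0.25])        # diagonal restriction on Sigma
Pi = np.array([[0.3, 0.0],
               [0.0, 0.1],
               [0.2, 0.0]])              # interactions with demographics
d = rng.standard_normal((1000, 2))       # observed demographic draws d_i
nu = rng.standard_normal((1000, 3))      # unobserved standard normal shocks

# beta_i = beta + Pi d_i + L nu_i, where L L' = Sigma (Cholesky root).
L = np.linalg.cholesky(Sigma)
beta_i = beta + d @ Pi.T + nu @ L.T
```

Estimating $L$ rather than $\Sigma$ keeps the implied covariance matrix positive semi-definite at every candidate parameter value the optimizer tries.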
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Loading the Data\n",
"\n",
"The `product_data` argument of [`Problem`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.html#pyblp.Problem) should be a structured array-like object with fields that store data. Product data can be a structured [NumPy](https://numpy.org/) array, a [pandas](https://pandas.pydata.org/) DataFrame, or other similar objects."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"
"
],
"text/plain": [
" market_ids city_ids quarter weights nodes0 nodes1 nodes2 \\\n",
"0 C01Q1 1 1 0.05 0.434101 -1.500838 -1.151079 \n",
"1 C01Q1 1 1 0.05 -0.726649 0.133182 -0.500750 \n",
"2 C01Q1 1 1 0.05 -0.623061 -0.138241 0.797441 \n",
"3 C01Q1 1 1 0.05 -0.041317 1.257136 -0.683054 \n",
"4 C01Q1 1 1 0.05 -0.466691 0.226968 1.044424 \n",
"\n",
" nodes3 income income_squared age child \n",
"0 0.161017 0.495123 8.331304 -0.230109 -0.230851 \n",
"1 0.129732 0.378762 6.121865 -2.532694 0.769149 \n",
"2 -0.795549 0.105015 1.030803 -0.006965 -0.230851 \n",
"3 0.259044 -1.485481 -25.583605 -0.827946 0.769149 \n",
"4 0.092019 -0.316597 -6.517009 -0.230109 -0.230851 "
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_data = pd.read_csv(pyblp.data.NEVO_AGENTS_LOCATION)\n",
"agent_data.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `agent formulation` tells us which columns of demographic information to interact with $X_2$."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"income + income_squared + age + child"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_formulation = pyblp.Formulation('0 + income + income_squared + age + child')\n",
"agent_formulation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This tells us to include demographic realizations for `income`, `income_squared`, `age`, and the presence of children, `child`, but to ignore other possible demographics when interacting demographics with $X_2$. We should also generally exclude the constant from the demographic formula.\n",
"\n",
"Now we configure the [`Problem`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.html#pyblp.Problem) to include the `agent_formulation` and `agent_data`, which follow the `product_formulations` and `product_data`.\n",
"\n",
"When we display the class, it lists the demographic interactions in the table of formulations and reports $D = 4$, the dimension of the demographic draws."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Dimensions:\n",
"=================================================\n",
" T N F I K1 K2 D MD ED \n",
"--- ---- --- ---- ---- ---- --- ---- ----\n",
"94 2256 5 1880 1 4 4 20 1 \n",
"=================================================\n",
"\n",
"Formulations:\n",
"===================================================================\n",
" Column Indices: 0 1 2 3 \n",
"----------------------------- ------ -------------- ----- -----\n",
" X1: Linear Characteristics prices \n",
"X2: Nonlinear Characteristics 1 prices sugar mushy\n",
" d: Demographics income income_squared age child\n",
"==================================================================="
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"nevo_problem = pyblp.Problem(\n",
" product_formulations,\n",
" product_data,\n",
" agent_formulation,\n",
" agent_data\n",
")\n",
"nevo_problem"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The initialized problem can be solved with [`Problem.solve`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.solve.html#pyblp.Problem.solve). We'll use the same starting values as [Nevo (2000)](https://pyblp.readthedocs.io/en/stable/references.html#nevo-2000). By passing a diagonal matrix as starting values for $\\Sigma$, we're choosing to ignore covariance terms. Similarly, zeros in the starting values for $\\Pi$ mean that those parameters will be fixed at zero.\n",
"\n",
"To replicate common estimates, we'll use the non-default BFGS optimization routine (with a slightly tighter tolerance to avoid getting stuck at a spurious local minimum), and we'll configure [`Problem.solve`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.solve.html#pyblp.Problem.solve) to use 1-step GMM instead of 2-step GMM."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Problem Results Summary:\n",
"=======================================================================================================\n",
"GMM Objective Gradient Hessian Hessian Clipped Weighting Matrix Covariance Matrix\n",
"Step Value Norm Min Eigenvalue Max Eigenvalue Shares Condition Number Condition Number \n",
"---- --------- -------- -------------- -------------- ------- ---------------- -----------------\n",
" 1 +4.6E+00 +6.9E-06 +2.7E-05 +1.6E+04 0 +6.9E+07 +8.4E+08 \n",
"=======================================================================================================\n",
"\n",
"Cumulative Statistics:\n",
"===========================================================================\n",
"Computation Optimizer Optimization Objective Fixed Point Contraction\n",
" Time Converged Iterations Evaluations Iterations Evaluations\n",
"----------- --------- ------------ ----------- ----------- -----------\n",
" 00:02:11 Yes 51 57 45483 141195 \n",
"===========================================================================\n",
"\n",
"Nonlinear Coefficient Estimates (Robust SEs in Parentheses):\n",
"=====================================================================================================================\n",
"Sigma: 1 prices sugar mushy | Pi: income income_squared age child \n",
"------ ---------- ---------- ---------- ---------- | ------ ---------- -------------- ---------- ----------\n",
" 1 +5.6E-01 | 1 +2.3E+00 +0.0E+00 +1.3E+00 +0.0E+00 \n",
" (+1.6E-01) | (+1.2E+00) (+6.3E-01) \n",
" | \n",
"prices +0.0E+00 +3.3E+00 | prices +5.9E+02 -3.0E+01 +0.0E+00 +1.1E+01 \n",
" (+1.3E+00) | (+2.7E+02) (+1.4E+01) (+4.1E+00)\n",
" | \n",
"sugar +0.0E+00 +0.0E+00 -5.8E-03 | sugar -3.8E-01 +0.0E+00 +5.2E-02 +0.0E+00 \n",
" (+1.4E-02) | (+1.2E-01) (+2.6E-02) \n",
" | \n",
"mushy +0.0E+00 +0.0E+00 +0.0E+00 +9.3E-02 | mushy +7.5E-01 +0.0E+00 -1.4E+00 +0.0E+00 \n",
" (+1.9E-01) | (+8.0E-01) (+6.7E-01) \n",
"=====================================================================================================================\n",
"\n",
"Beta Estimates (Robust SEs in Parentheses):\n",
"==========\n",
" prices \n",
"----------\n",
" -6.3E+01 \n",
"(+1.5E+01)\n",
"=========="
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"initial_sigma = np.diag([0.3302, 2.4526, 0.0163, 0.2441])\n",
"initial_pi = np.array([\n",
" [ 5.4819, 0, 0.2037, 0 ],\n",
" [15.8935, -1.2000, 0, 2.6342],\n",
" [-0.2506, 0, 0.0511, 0 ],\n",
" [ 1.2650, 0, -0.8091, 0 ]\n",
"])\n",
"tighter_bfgs = pyblp.Optimization('bfgs', {'gtol': 1e-5})\n",
"nevo_results = nevo_problem.solve(\n",
" initial_sigma,\n",
" initial_pi,\n",
" optimization=tighter_bfgs,\n",
" method='1s'\n",
")\n",
"nevo_results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Results are similar to those in the original paper with a (scaled) objective value of $q(\\hat{\\theta}) = 4.65$ and a price coefficient of $\\hat{\\alpha} = -62.7$. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Restricting Parameters\n",
"\n",
"Because the interactions between `price`, `income`, and `income_squared` are potentially collinear, we might worry that $\\hat{\\Pi}_{21} = 588$ and $\\hat{\\Pi}_{22} = -30.2$ are pushing the price coefficient in opposite directions. Both are large in magnitude but statistically insignficant. One way of dealing with this is to restrict $\\Pi_{22} = 0$.\n",
"\n",
"There are two ways we can do this:\n",
"\n",
"1. Change the initial $\\Pi_0$ values to make this term zero.\n",
"2. Change the agent formula to drop `income_squared`.\n",
"\n",
"First, we'll change the initial $\\Pi_0$ values."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Problem Results Summary:\n",
"=======================================================================================================\n",
"GMM Objective Gradient Hessian Hessian Clipped Weighting Matrix Covariance Matrix\n",
"Step Value Norm Min Eigenvalue Max Eigenvalue Shares Condition Number Condition Number \n",
"---- --------- -------- -------------- -------------- ------- ---------------- -----------------\n",
" 1 +1.5E+01 +5.2E-06 +4.7E-02 +1.7E+04 0 +6.9E+07 +5.7E+05 \n",
"=======================================================================================================\n",
"\n",
"Cumulative Statistics:\n",
"===========================================================================\n",
"Computation Optimizer Optimization Objective Fixed Point Contraction\n",
" Time Converged Iterations Evaluations Iterations Evaluations\n",
"----------- --------- ------------ ----------- ----------- -----------\n",
" 00:01:22 Yes 34 40 33651 104191 \n",
"===========================================================================\n",
"\n",
"Nonlinear Coefficient Estimates (Robust SEs in Parentheses):\n",
"=====================================================================================================================\n",
"Sigma: 1 prices sugar mushy | Pi: income income_squared age child \n",
"------ ---------- ---------- ---------- ---------- | ------ ---------- -------------- ---------- ----------\n",
" 1 +3.7E-01 | 1 +3.1E+00 +0.0E+00 +1.2E+00 +0.0E+00 \n",
" (+1.2E-01) | (+1.1E+00) (+1.0E+00) \n",
" | \n",
"prices +0.0E+00 +1.8E+00 | prices +4.2E+00 +0.0E+00 +0.0E+00 +1.2E+01 \n",
" (+9.2E-01) | (+4.6E+00) (+5.2E+00)\n",
" | \n",
"sugar +0.0E+00 +0.0E+00 -4.4E-03 | sugar -1.9E-01 +0.0E+00 +2.8E-02 +0.0E+00 \n",
" (+1.2E-02) | (+3.5E-02) (+3.2E-02) \n",
" | \n",
"mushy +0.0E+00 +0.0E+00 +0.0E+00 +8.6E-02 | mushy +1.5E+00 +0.0E+00 -1.5E+00 +0.0E+00 \n",
" (+1.9E-01) | (+6.5E-01) (+1.1E+00) \n",
"=====================================================================================================================\n",
"\n",
"Beta Estimates (Robust SEs in Parentheses):\n",
"==========\n",
" prices \n",
"----------\n",
" -3.2E+01 \n",
"(+2.3E+00)\n",
"=========="
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"restricted_pi = initial_pi.copy()\n",
"restricted_pi[1, 1] = 0\n",
"nevo_problem.solve(\n",
" initial_sigma,\n",
" restricted_pi,\n",
" optimization=tighter_bfgs,\n",
" method='1s'\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we'll drop both `income_squared` and the corresponding column in $\\Pi_0$."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Problem Results Summary:\n",
"=======================================================================================================\n",
"GMM Objective Gradient Hessian Hessian Clipped Weighting Matrix Covariance Matrix\n",
"Step Value Norm Min Eigenvalue Max Eigenvalue Shares Condition Number Condition Number \n",
"---- --------- -------- -------------- -------------- ------- ---------------- -----------------\n",
" 1 +1.5E+01 +5.2E-06 +4.7E-02 +1.7E+04 0 +6.9E+07 +5.7E+05 \n",
"=======================================================================================================\n",
"\n",
"Cumulative Statistics:\n",
"===========================================================================\n",
"Computation Optimizer Optimization Objective Fixed Point Contraction\n",
" Time Converged Iterations Evaluations Iterations Evaluations\n",
"----------- --------- ------------ ----------- ----------- -----------\n",
" 00:01:22 Yes 34 40 33639 104205 \n",
"===========================================================================\n",
"\n",
"Nonlinear Coefficient Estimates (Robust SEs in Parentheses):\n",
"=====================================================================================================\n",
"Sigma: 1 prices sugar mushy | Pi: income age child \n",
"------ ---------- ---------- ---------- ---------- | ------ ---------- ---------- ----------\n",
" 1 +3.7E-01 | 1 +3.1E+00 +1.2E+00 +0.0E+00 \n",
" (+1.2E-01) | (+1.1E+00) (+1.0E+00) \n",
" | \n",
"prices +0.0E+00 +1.8E+00 | prices +4.2E+00 +0.0E+00 +1.2E+01 \n",
" (+9.2E-01) | (+4.6E+00) (+5.2E+00)\n",
" | \n",
"sugar +0.0E+00 +0.0E+00 -4.4E-03 | sugar -1.9E-01 +2.8E-02 +0.0E+00 \n",
" (+1.2E-02) | (+3.5E-02) (+3.2E-02) \n",
" | \n",
"mushy +0.0E+00 +0.0E+00 +0.0E+00 +8.6E-02 | mushy +1.5E+00 -1.5E+00 +0.0E+00 \n",
" (+1.9E-01) | (+6.5E-01) (+1.1E+00) \n",
"=====================================================================================================\n",
"\n",
"Beta Estimates (Robust SEs in Parentheses):\n",
"==========\n",
" prices \n",
"----------\n",
" -3.2E+01 \n",
"(+2.3E+00)\n",
"=========="
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"restricted_formulation = pyblp.Formulation('0 + income + age + child')\n",
"deleted_pi = np.delete(initial_pi, 1, axis=1)\n",
"restricted_problem = pyblp.Problem(\n",
" product_formulations,\n",
" product_data,\n",
" restricted_formulation,\n",
" agent_data\n",
")\n",
"restricted_problem.solve(\n",
" initial_sigma,\n",
" deleted_pi,\n",
" optimization=tighter_bfgs,\n",
" method='1s'\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The parameter estimates and standard errors are identical for both approaches. Based on the number of fixed point iterations, there is some evidence that the solver took a slightly different path for each problem, but both restricted problems arrived at identical answers. The $\\hat{\\Pi}_{12}$ interaction term is still insignificant."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
},
"pycharm": {
"stem_cell": {
"cell_type": "raw",
"metadata": {
"collapsed": false
},
"source": []
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Problem Simulation Tutorial"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'0.10.0'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pyblp\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"pyblp.options.digits = 2\n",
"pyblp.options.verbose = False\n",
"pyblp.__version__"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before configuring and solving a problem with real data, it may be a good idea to perform Monte Carlo analysis on simulated data to verify that it is possible to accurately estimate model parameters. For example, before configuring and solving the example problems in the prior tutorials, it may have been a good idea to simulate data according to the assumed models of supply and demand. During such Monte Carlo anaysis, the data would only be used to determine sample sizes and perhaps to choose reasonable true parameters.\n",
"\n",
"Simulations are configured with the [`Simulation`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Simulation.html#pyblp.Simulation) class, which requires many of the same inputs as [`Problem`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.html#pyblp.Problem). The two main differences are:\n",
"\n",
"1. Variables in formulations that cannot be loaded from `product_data` or `agent_data` will be drawn from independent uniform distributions.\n",
"2. True parameters and the distribution of unobserved product characteristics are specified.\n",
"\n",
"First, we'll use [`build_id_data`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.build_id_data.html#pyblp.build_id_data) to build market and firm IDs for a model in which there are $T = 50$ markets, and in each market $t$, a total of $J_t = 20$ products produced by $F = 10$ firms."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"id_data = pyblp.build_id_data(T=50, J=20, F=10)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we'll create an [`Integration`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Integration.html#pyblp.Integration) configuration to build agent data according to a Gauss-Hermite product rule that exactly integrates polynomials of degree $2 \\times 9 - 1 = 17$ or less."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Configured to construct nodes and weights according to the level-9 Gauss-Hermite product rule with options {}."
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"integration = pyblp.Integration('product', 9)\n",
"integration"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll then pass these data to [`Simulation`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Simulation.html#pyblp.Simulation). We'll use [`Formulation`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Formulation.html#pyblp.Formulation) configurations to create an $X_1$ that consists of a constant, prices, and an exogenous characteristic; an $X_2$ that consists only of the same exogenous characteristic; and an $X_3$ that consists of the common exogenous characteristic and a cost-shifter."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Dimensions:\n",
"=====================================\n",
" T N F I K1 K2 K3 \n",
"--- ---- --- --- ---- ---- ----\n",
"50 1000 10 450 3 1 2 \n",
"=====================================\n",
"\n",
"Formulations:\n",
"=================================================\n",
" Column Indices: 0 1 2 \n",
"------------------------------- --- ------ ---\n",
" X1: Linear Characteristics 1 prices x \n",
" X2: Nonlinear Characteristics x \n",
"X3: Linear Cost Characteristics x z \n",
"=================================================\n",
"\n",
"Nonlinear Coefficient True Values:\n",
"================\n",
"Sigma: x \n",
"------ --------\n",
" x +1.0E+00\n",
"================\n",
"\n",
"Beta True Values:\n",
"============================\n",
" 1 prices x \n",
"-------- -------- --------\n",
"+1.0E+00 -2.0E+00 +2.0E+00\n",
"============================\n",
"\n",
"Gamma True Values:\n",
"==================\n",
" x z \n",
"-------- --------\n",
"+1.0E+00 +4.0E+00\n",
"=================="
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"simulation = pyblp.Simulation(\n",
" product_formulations=(\n",
" pyblp.Formulation('1 + prices + x'),\n",
" pyblp.Formulation('0 + x'),\n",
" pyblp.Formulation('0 + x + z')\n",
" ),\n",
" beta=[1, -2, 2],\n",
" sigma=1,\n",
" gamma=[1, 4],\n",
" product_data=id_data,\n",
" integration=integration,\n",
" seed=0\n",
")\n",
"simulation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When [`Simulation`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Simulation.html#pyblp.Simulation) is initialized, it constructs [`Simulation.agent_data`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Simulation.html#pyblp.Simulation.agent_data) and simulates [`Simulation.product_data`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Simulation.html#pyblp.Simulation.product_data).\n",
"\n",
"The [`Simulation`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Simulation.html#pyblp.Simulation) can be further configured with other arguments that determine how unobserved product characteristics are simulated and how marginal costs are specified.\n",
"\n",
"At this stage, simulated variables are not consistent with true parameters, so we still need to solve the simulation with [`Simulation.replace_endogenous`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Simulation.replace_endogenous.html#pyblp.Simulation.replace_endogenous). This method replaced simulated prices and market shares with values that are consistent with the true parameters. Just like [`ProblemResults.compute_prices`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.compute_prices.html#pyblp.ProblemResults.compute_prices), to do so it iterates over the $\\zeta$-markup equation from [Morrow and Skerlos (2011)](https://pyblp.readthedocs.io/en/stable/references.html#morrow-and-skerlos-2011)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Simulation Results Summary:\n",
"=====================================\n",
"Computation Fixed Point Contraction\n",
" Time Iterations Evaluations\n",
"----------- ----------- -----------\n",
" 00:00:01 721 721 \n",
"====================================="
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"simulation_results = simulation.replace_endogenous()\n",
"simulation_results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we can try to recover the true parameters by creating and solving a [`Problem`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.html#pyblp.Problem). \n",
"\n",
"The convenience method [`SimulationResults.to_problem`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.SimulationResults.to_problem.html#pyblp.SimulationResults.to_problem) constructs some basic \"sums of characteristics\" BLP instruments that are functions of all exogenous numerical variables in the problem. In this example, excluded demand-side instruments are the cost-shifter `z` and traditional BLP instruments constructed from `x`. Excluded supply-side instruments are traditional BLP instruments constructed from `x` and `z`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Dimensions:\n",
"=================================================\n",
" T N F I K1 K2 K3 MD MS \n",
"--- ---- --- --- ---- ---- ---- ---- ----\n",
"50 1000 10 450 3 1 2 5 6 \n",
"=================================================\n",
"\n",
"Formulations:\n",
"=================================================\n",
" Column Indices: 0 1 2 \n",
"------------------------------- --- ------ ---\n",
" X1: Linear Characteristics 1 prices x \n",
" X2: Nonlinear Characteristics x \n",
"X3: Linear Cost Characteristics x z \n",
"================================================="
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"problem = simulation_results.to_problem()\n",
"problem"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll choose starting values that are half the true parameters so that the optimization routine has to do some work. Note that since we're jointly estimating the supply side, we need to provide an initial value for the linear coefficient on prices because this parameter cannot be concentrated out of the problem (unlike linear coefficients on exogenous characteristics)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Problem Results Summary:\n",
"==============================================================================================================\n",
"GMM Objective Projected Reduced Hessian Reduced Hessian Clipped Weighting Matrix Covariance Matrix\n",
"Step Value Gradient Norm Min Eigenvalue Max Eigenvalue Shares Condition Number Condition Number \n",
"---- --------- ------------- --------------- --------------- ------- ---------------- -----------------\n",
" 2 +6.4E+00 +6.9E-08 +7.2E+00 +3.8E+03 0 +3.7E+04 +1.5E+04 \n",
"==============================================================================================================\n",
"\n",
"Cumulative Statistics:\n",
"===========================================================================\n",
"Computation Optimizer Optimization Objective Fixed Point Contraction\n",
" Time Converged Iterations Evaluations Iterations Evaluations\n",
"----------- --------- ------------ ----------- ----------- -----------\n",
" 00:00:33 Yes 23 30 7693 24410 \n",
"===========================================================================\n",
"\n",
"Nonlinear Coefficient Estimates (Robust SEs in Parentheses):\n",
"==================\n",
"Sigma: x \n",
"------ ----------\n",
" x +7.8E-01 \n",
" (+5.2E-01)\n",
"==================\n",
"\n",
"Beta Estimates (Robust SEs in Parentheses):\n",
"==================================\n",
" 1 prices x \n",
"---------- ---------- ----------\n",
" +9.6E-01 -2.0E+00 +2.1E+00 \n",
"(+9.3E-02) (+2.4E-02) (+1.4E-01)\n",
"==================================\n",
"\n",
"Gamma Estimates (Robust SEs in Parentheses):\n",
"======================\n",
" x z \n",
"---------- ----------\n",
" +9.8E-01 +4.0E+00 \n",
"(+8.7E-02) (+8.5E-02)\n",
"======================"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"results = problem.solve(\n",
" sigma=0.5 * simulation.sigma, \n",
" pi=0.5 * simulation.pi,\n",
" beta=[None, 0.5 * simulation.beta[1], None],\n",
" optimization=pyblp.Optimization('l-bfgs-b', {'gtol': 1e-5})\n",
")\n",
"results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The parameters seem to have been estimated reasonably well."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[ 1. , 0.96223514],\n",
" [-2. , -2.00792431],\n",
" [ 2. , 2.10032015]])"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"np.c_[simulation.beta, results.beta]"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[1. , 0.97820624],\n",
" [4. , 4.03121577]])"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"np.c_[simulation.gamma, results.gamma]"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[1. , 0.78358853]])"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"np.c_[simulation.sigma, results.sigma]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In addition to checking that the configuration for a model based on actual data makes sense, the [`Simulation`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Simulation.html#pyblp.Simulation) class can also be a helpful tool for better understanding under what general conditions BLP models can be accurately estimated. Simulations are also used extensively in pyblp's test suite."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Logit and Nested Logit Tutorial"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'0.10.0'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pyblp\n",
"import numpy as np\n",
"import pandas as pd\n",
"\n",
"pyblp.options.digits = 2\n",
"pyblp.options.verbose = False\n",
"pyblp.__version__"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this tutorial, we'll use data from [Nevo (2000)](https://pyblp.readthedocs.io/en/stable/references.html#nevo-2000) to solve the paper's fake cereal problem. Locations of CSV files that contain the data are in the [`data`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.data.html#module-pyblp.data) module.\n",
"\n",
"We will compare two simple models, the plain (IIA) logit model and the nested logit (GEV) model using the fake cereal dataset of [Nevo (2000)](https://pyblp.readthedocs.io/en/stable/references.html#nevo-2000).\n",
"\n",
"## Theory of Plain Logit\n",
"\n",
"Let's start with the plain logit model under independence of irrelevant alternatives (IIA). In this model (indirect) utility is given by\n",
"\n",
"$$U_{ijt} = \\alpha p_{jt} + x_{jt} \\beta^\\text{ex} + \\xi_{jt} + \\epsilon_{ijt},$$\n",
"\n",
"where $\\epsilon_{ijt}$ is distributed IID with the Type I Extreme Value (Gumbel) distribution. It is common to normalize the mean utility of the outside good to zero so that $U_{i0t} = \\epsilon_{i0t}$. This gives us aggregate market shares\n",
"\n",
"$$s_{jt} = \\frac{\\exp(\\alpha p_{jt} + x_{jt} \\beta^\\text{ex} + \\xi_{jt})}{1 + \\sum_k \\exp(\\alpha p_{kt} + x_{kt} \\beta^\\text{ex} + \\xi_{kt})}.$$\n",
"\n",
"If we take logs we get\n",
"\n",
"$$\\log s_{jt} = \\alpha p_{jt} + x_{jt} \\beta^\\text{ex} + \\xi_{jt} - \\log \\sum_k \\exp(\\alpha p_{kt} + x_{kt} \\beta^\\text{ex} + \\xi_{kt})$$\n",
"\n",
"and\n",
"\n",
"$$\\log s_{0t} = -\\log \\sum_k \\exp(\\alpha p_{kt} + x_{kt} \\beta^\\text{ex} + \\xi_{kt}).$$\n",
"\n",
"By differencing the above we get a linear estimating equation:\n",
"\n",
"$$\\log s_{jt} - \\log s_{0t} = \\alpha p_{jt} + x_{jt} \\beta^\\text{ex} + \\xi_{jt}.$$\n",
"\n",
"Because the left hand side is data, we can estimate this model using linear IV GMM."
]
},
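{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch of this estimating equation (with hypothetical toy shares rather than real data), the left-hand side $\\log s_{jt} - \\log s_{0t}$ can be computed directly from market shares:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"toy = pd.DataFrame({\n",
"    'market_ids': ['A', 'A', 'B', 'B'],\n",
"    'shares': [0.3, 0.2, 0.25, 0.35],\n",
"})\n",
"\n",
"# the outside share in each market is whatever is left over\n",
"outside = 1 - toy.groupby('market_ids')['shares'].transform('sum')\n",
"toy['logit_delta'] = np.log(toy['shares']) - np.log(outside)\n",
"toy"
]
},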
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Application of Plain Logit\n",
"\n",
"A Logit [`Problem`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.html#pyblp.Problem) can be created by simply excluding the formulation for the nonlinear parameters, $X_2$, along with any agent information. In other words, it requires only specifying the _linear component_ of demand.\n",
"\n",
"We'll set up and solve a simple version of the fake data cereal problem from [Nevo (2000)](https://pyblp.readthedocs.io/en/stable/references.html#nevo-2000). Since we won't include any demand-side nonlinear characteristics or parameters, we don't have to worry about configuring an optimization routine.\n",
"\n",
"There are some important reserved variable names:\n",
"\n",
"- `market_ids` are the unique market identifiers which we subscript with $t$.\n",
"- `shares` specifies the market shares which need to be between zero and one, and within a market ID, $\\sum_{j} s_{jt} \\leq 1$.\n",
"- `prices` are prices $p_{jt}$. These have some special properties and are _always_ treated as endogenous.\n",
"- `demand_instruments0`, `demand_instruments1`, and so on are numbered demand instruments. These represent only the _excluded_ instruments. The exogenous regressors in $X_1$ will be automatically added to the set of instruments.\n",
"\n",
"We begin with two steps:\n",
"\n",
"1. Load the product data which at a minimum consists of `market_ids`, `shares`, `prices`, and at least a single column of demand instruments, `demand_instruments0`.\n",
"2. Define a [`Formulation`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Formulation.html#pyblp.Formulation) for the $X_1$ (linear) demand model.\n",
"\n",
" - This and all other formulas are similar to R and [patsy](https://patsy.readthedocs.io/en/stable/) formulas.\n",
" - It includes a constant by default. To exclude the constant, specify either a `0` or a `-1`.\n",
" - To efficiently include fixed effects, use the `absorb` option and specify which categorical variables you would like to absorb.\n",
" - Some model reduction may happen automatically. The constant will be excluded if you include fixed effects and some precautions are taken against collinearity. However, you will have to make sure that differently-named variables are not collinear.\n",
" \n",
"3. Combine the [`Formulation`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Formulation.html#pyblp.Formulation) and product data to construct a [`Problem`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.html#pyblp.Problem).\n",
"4. Use [`Problem.solve`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.solve.html#pyblp.Problem.solve) to estimate paramters."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Loading the Data\n",
"\n",
"The `product_data` argument of [`Problem`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.html#pyblp.Problem) should be a structured array-like object with fields that store data. Product data can be a structured [NumPy](https://numpy.org/) array, a [pandas](https://pandas.pydata.org/) DataFrame, or other similar objects."
]
},
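{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, the fake cereal product data that ship with PyBLP can be loaded into a pandas DataFrame from the location stored in the [`data`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.data.html#module-pyblp.data) module:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"product_data = pd.read_csv(pyblp.data.NEVO_PRODUCTS_LOCATION)\n",
"product_data.head()"
]
},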
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"
"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"plt.colorbar(plt.matshow(diversions[single_market]));"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The diagonal of the first image consists of own elasticities and the diagonal of the second image consists of diversion ratios to the outside good. As one might expect, own price elasticities are large and negative while cross-price elasticities are positive but much smaller.\n",
"\n",
"Elasticities and diversion ratios can be computed with respect to variables other than `prices` with the `name` argument of [`ProblemResults.compute_elasticities`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.compute_elasticities.html#pyblp.ProblemResults.compute_elasticities) and [`ProblemResults.compute_diversion_ratios`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.compute_diversion_ratios.html#pyblp.ProblemResults.compute_diversion_ratios). Additionally, [`ProblemResults.compute_long_run_diversion_ratios`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.compute_long_run_diversion_ratios.html#pyblp.ProblemResults.compute_long_run_diversion_ratios) can be used to used to understand substitution when products are eliminated from the choice set.\n",
"\n",
"The convenience methods [`ProblemResults.extract_diagonals`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.extract_diagonals.html#pyblp.ProblemResults.extract_diagonals) and [`ProblemResults.extract_diagonal_means`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.extract_diagonal_means.html#pyblp.ProblemResults.extract_diagonal_means) can be used to extract information about own elasticities of demand from elasticity matrices."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"means = results.extract_diagonal_means(elasticities)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"An alternative to summarizing full elasticity matrices is to use [`ProblemResults.compute_aggregate_elasticities`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.compute_aggregate_elasticities.html#pyblp.ProblemResults.compute_aggregate_elasticities) to estimate aggregate elasticities of demand, $E$, in each market, which reflect the change in total sales under a proportional sales tax of some factor."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"aggregates = results.compute_aggregate_elasticities(factor=0.1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since demand for an entire product category is generally less elastic than the average elasticity of individual products, mean own elasticities are generally larger in magnitude than aggregate elasticities."
]
},
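{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sketch, the two sets of market-level estimates can be summarized side by side:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pd.DataFrame({\n",
"    'Mean Own Elasticities': means.flatten(),\n",
"    'Aggregate Elasticities': aggregates.flatten()\n",
"}).describe()"
]
},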
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXAAAAD4CAYAAAD1jb0+AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAeVklEQVR4nO3de3QU9f3/8ecbTAURRSUqEC3YH4hcQoCAchOoFRQQFfAcAautYvRYq9IvVPAGbbWnXvqFSr1RtaBVvGA9+lO+P0GBKmprg+UqCkpjifCFGCUlXATC+/fHTrZJ2E02ye4mg6/HOXMyMzs7894P2ReTuXzG3B0REQmfJg1dgIiI1I0CXEQkpBTgIiIhpQAXEQkpBbiISEgdlc6NtW7d2tu3b5/OTYqIhN7KlSu/dPfMqvPTGuDt27cnPz8/nZsUEQk9M/s81vyED6GYWVMz+4eZvRZMdzCzv5nZJjN73sy+k6xiRUSkZrU5Bn4zsKHC9L3ALHfvCHwNXJPMwkREpHoJBbiZZQEjgceDaQO+DywMFpkPXJKKAkVEJLZEj4HPBn4OtAymTwJ2uvvBYLoQaBfrjWaWB+QBnH766Ye9fuDAAQoLC9m3b18tyhaJaNasGVlZWWRkZDR0KSJpV2OAm9koYIe7rzSzIeWzYywas1MVd58LzAXIzc09bJnCwkJatmxJ+/btiezYiyTG3SkuLqawsJAOHTo0dDkiaZfIHvgAYLSZjQCaAccR2SNvZWZHBXvhWcDWuhSwb98+hbfUiZlx0kknUVRU1NCliDSIGo+Bu/t0d89y9/bA5cBSd58ILAPGBYtdBbxS1yIU3lJX+t2Rb7P63Il5K/AzM/uUyDHxJ5JTkoiIJKJWAe7uy919VDC+2d37uvv/cffL3P2bpFRkltwhoU0aP/zhD6PTBw8eJDMzk1GjRiXlI8Xj7tx999107NiRTp06MXToUNavX5/UbSxfvpzjjz+enJyc6PDmm28CcOyxx9ZpnbNnz2bPnj3R6REjRrBz5864yz/66KM89dRTAMybN4+tW/9ztG3SpEl89NFHdapD5NsurXdiNlYtWrRg3bp17N27l+bNm7NkyRLatYt5UU1SPfTQQ7z33nusXr2aY445hsWLFzN69GjWr19Ps2bNkradQYMG8dprryVtfbNnz+aKK67gmGOOAWDRokXVLn/99ddHx+fNm0e3bt1o27YtAI8//njS6hJJtfJ9wsbyHBx1ZhW48MILef311wFYsGAB48ePj762e/durr76avr06UPPnj155ZXI4f6CggIGDRpEr1696NWrF++99x4Q2esdMmQI48aNo3PnzkycOJFYTz669957mTNnTjQIhw0bRv/+/XnmmWd44YUX+NnPfgbA7373O8444wwAPvvsMwYOHAhEuiaYMWMGvXr1onv37nz88cd1+uylpaWcd9550fWUf77du3czcuRIevToQbdu3Xj++ed58MEH2bp1K0OHDmXo0KHROr788ksAnnrqKbKzs+nRo0f0r5qZM2fywAMPsHDhQvLz85k4cSI5OTns3buXIUOGRLtXWLx4Mf369aNXr15cdtlllJaWAjBt2jS6dOlCdnY2U6ZMqdNnFDkiuXvaht69e3tVH330UeUZkf/ckjckoEWLFr569WofO3as792713v06OHLli3zkSNHurv79OnT/emnn3Z396+//to7duzopaWlvnv3bt+7d6+7u2/cuNHLP9+yZcv8uOOO8y1btnhZWZmfc845/s4771TaZklJiZ9wwgmH1TJ79myfPHmyb9u2zXNzc93dfezYsZ6bm+uFhYU+b948nzZtmru7f/e73/UHH3zQ3d0feughv+aaaw5bX3ktPXr0iA6ffvpp9HO7ux84cMBLSkrc3b2oqMi/973v+aFDh3zhwoU+adKk6Lp27twZ3W5RUVF0fvn0unXrvFOnTtHXiouL3d19xowZfv
/997u7++DBg/3vf/979L3l00VFRT5o0CAvLS11d/ff/OY3/otf/MKLi4u9U6dOfujQoWj7V3XY75BIitQiVpK8XfI9RqbqEEogOzubgoICFixYwIgRIyq9tnjxYl599VUeeOABIHLp47/+9S/atm3LjTfeyKpVq2jatCkbN26Mvqdv375kZWUBkJOTQ0FBQXTPuTrujplx6qmnUlpayq5du9iyZQsTJkzg7bff5p133mHMmDHR5cvHe/fuzZ///OeY66zpEIq7c9ttt/H222/TpEkTvvjiC7Zv30737t2ZMmUKt956K6NGjWLQoEHV1r506VLGjRtH69atATjxxBNr/Lzl/vrXv/LRRx8xYMAAAPbv30+/fv047rjjaNasGZMmTWLkyJEpPy8hEiYK8ApGjx7NlClTWL58OcXFxdH57s5LL73EmWeeWWn5mTNncsopp7B69WoOHTpU6bj10UcfHR1v2rQpBw8erPTe4447jhYtWrB58+bo4RGADz/8kMGDBwPQr18//vjHP3LmmWcyaNAgnnzySd5//31++9vfHradWNtI1DPPPENRURErV64kIyOD9u3bs2/fPjp16sTKlStZtGgR06dPZ9iwYdx1111x11P+n09duDvnn38+CxYsOOy1Dz74gLfeeovnnnuO3//+9yxdurRO2xA50ugYeAVXX301d911F927d680f/jw4cyZMyd6HPsf//gHACUlJbRp04YmTZrw9NNPU1ZWVqvtTZ06lZtuuom9e/cC8Oabb7JixQomTJgAwLnnnssDDzzAueeeS8+ePVm2bBlHH300xx9/fH0/aiUlJSWcfPLJZGRksGzZMj7/PNJz5datWznmmGO44oormDJlCh9++CEALVu2ZNeuXYet57zzzuOFF16I/uf31VdfHbZMvPeec845vPvuu3z66acA7Nmzh40bN1JaWkpJSQkjRoxg9uzZrFq1KmmfWyTsGt8eeAOe3s3KyuLmm28+bP6dd97JLbfcQnZ2Nu5O+/btee2117jhhhsYO3YsL774IkOHDqVFixa12t5Pf/pTvv76a7p3707Tpk059dRTeeWVV2jevDkQOfSxZcsWzj33XJo2bcppp51G586da/253nnnHXJycqLTd9xxB+PGjYtOT5w4kYsuuojc3FxycnKi21i7di1Tp06lSZMmZGRk8MgjjwCQl5fHhRdeSJs2bVi2bFl0PV27duX2229n8ODBNG3alJ49ezJv3rxKtfzoRz/i+uuvp3nz5rz//vvR+ZmZmcybN4/x48fzzTeRK1LvvvtuWrZsycUXX8y+fftwd2bNmlXrzy9ypDJPY2Dm5uZ61Qc6bNiwgbPOOittNciRR79Dki4NdRmhma1099yq83UIRUQkpBTgIiIhpQAXEQkpBbiISEgpwEVEQkoBLiISUo0uwBugN1kAXn75Zcyszh1CpUPVrlgTMXPmTNq1a1epO9mdO3eyfPnyOt2WvnPnTh5++OHo9NatWytdUx5LxS5jf/3rX1d6rX///rWuQUQiGl2AN5QFCxYwcOBAnnvuuaSts663tsdTlwAHmDx5MqtWrYoOrVq1qnMNVQO8bdu2LFy4sNr3PP7443Tp0gU4PMDLe3AUkdpTgBPpTvXdd9/liSeeqBTghw4d4oYbbqBr166MGjWKESNGRMNq0aJFdO7cmYEDB3LTTTdF92ZnzpxJXl4ew4YN48orr6SsrIypU6fSp08fsrOzeeyxx2pc9y9/+Uv69OlDt27dyMvLw91jdsW6cuVKBg8eTO/evRk+fDjbtm2r0+f/4IMP6N+/Pz179qR///588sknAKxfv56+ffuSk5NDdnY2mzZtYtq0aXz22Wfk5OQwdepUCgoK6NatGwBlZWVMmTKF7t27k52dzZw5cwCiXcZOmzaNvXv3kpOTw8SJE4HKD5W4//77o+00Y8YMIHaXtiISiNVFYaqGRLqTbYDeZP3pp5/2q6++2t3d+/Xr5ytXrnR39xdffN
EvvPBCLysr823btnmrVq38xRdf9L1793pWVpZv3rzZ3d0vv/zyaNezM2bM8F69evmePXvc3f2xxx7zX/3qV+7uvm/fPu/du7dv3rw57rrd/9MNq7v7FVdc4a+++qq7V+6Kdf/+/d6vXz/fsWOHu7s/99xz/uMf//iwzzZjxgxv27ZttCvZIUOGuLtX6i63pKTEDxw44O7uS5Ys8TFjxri7+4033uh/+tOf3N39m2++8T179vg///lP79q1a3T9FacffvhhHzNmTHRd5Z+jYt3lXdiWK59+4403/Nprr/VDhw55WVmZjxw50v/yl7/E7dK2InUnK+kSuu5kzawZ8DZwNJG+Uxa6+wwzmwcMBkqCRX/k7qHsaWjBggXccsstAFx++eUsWLCAXr16sWLFCi677DKaNGnCqaeeGn2Awccff8wZZ5xBhw4dABg/fjxz586Nrm/06NHR/kwWL17MmjVronvXJSUlbNq0Ke66AZYtW8Z9993Hnj17+Oqrr+jatSsXXXRRpZo/+eQT1q1bx/nnnw9E9n7btGkT8/NNnjy52gchlJSUcNVVV7Fp0ybMjAMHDgCR3hDvueceCgsLGTNmDB07dqy2Hd98802uv/56jjoq8mtVm+5kFy9ezOLFi+nZsycQ+ato06ZNDBo0qFZd2op8myTSmdU3wPfdvdTMMoAVZvY/wWtT3b36A6CNXHFxMUuXLmXdunWYGWVlZZgZ9913X8yn6ABx55er2KmVuzNnzhyGDx9eaZnyp/9UtW/fPm644Qby8/M57bTTmDlzJvv27YtZQ9euXSt1CFVXd955J0OHDuXll1+moKCAIUOGADBhwgTOPvtsXn/9dYYPH87jjz9eqevbWDXVpzvZ6dOnc9111x32Wm26tBX5NqnxGHiwB18aTGYEQyN5Ilz9LVy4kCuvvJLPP/+cgoICtmzZQocOHVixYgUDBw7kpZde4tChQ2zfvp3ly5cD0LlzZzZv3kxBQQFAtcdlhw8fziOPPBLdq924cSO7d++Ou+7ysG7dujWlpaWVThBW7Ir1zDPPpKioKBrgBw4cqPMDkUtKSqLPAK3Ye2B5X+U33XQTo0ePZs2aNXG7g4XII+EeffTR6MnbWN3JZmRkRNuiouHDh/Pkk09GH6P2xRdfsGPHjrhd2opIgicxzaypma0CdgBL3P1vwUv3mNkaM5tlZkfHeW+emeWbWX5RUVGN20r2UfCaLFiwgEsvvbTSvLFjx/Lss88yduxYsrKy6NatG9dddx1nn302xx9/PM2bN+fhhx/mggsuYODAgZxyyilx++ieNGkSXbp0oVevXtH1HDx4MO66W7VqxbXXXkv37t255JJL6NOnT3Rd5V2x5uTkUFZWxsKFC7n11lvp0aMHOTk5ca/omDVrVqXLCMv/4yn385//nOnTpzNgwIBKfZo///zzdOvWjZycHD7++GOuvPJKTjrpJAYMGEC3bt2YOnXqYZ/19NNPjz4T89lnnz2slry8PLKzs6MnMcsNGzaMCRMm0K9fP7p37864cePYtWsXa9eujZ5Iveeee7jjjjtifkaRb6NadSdrZq2Al4GfAsXA/wLfAeYCn7n7L6t7fxi7ky0tLeXYY4+luLiYvn378u6770Yfd3bsscfi7vzkJz+hY8eOTJ48OSnrltpp7L9DcuRobN3J1uqBDu6+08yWAxe4+wPB7G/M7I/AEfm48FGjRrFz507279/PnXfeGQ3YP/zhD8yfP5/9+/fTs2fPmMdu67puEZFEJHIVSiZwIAjv5sAPgHvNrI27b7PIWatLgHUprrVBlB+brmry5Mm13uNOdN0iIolIZA+8DTDfzJoSOWb+gru/ZmZLg3A3YBVwfV2LqM/VC/LtVptDgCJHmhoD3N3XAD1jzP9+Mgpo1qwZxcXFnHTSSQpxqRV3p7i4mGbNmjV0KSINosEfapyVlUVhYSGJXKEiUlWzZs3Iyspq6DJEGkSDB3hGRkb0jkYREUmcOrMSEQkpBb
iISEgpwEVEQkoBLiISUgpwEZGQUoCLiISUAlxEJKQU4CIiIaUAFxEJKQW4iEhIKcBFREJKAS4iElIKcBGRkFKAi4iElAJcRCSkagxwM2tmZh+Y2WozW29mvwjmdzCzv5nZJjN73sy+k/pyRUSkXCJ74N8A33f3HkAOcIGZnQPcC8xy947A18A1qStTRESqqjHAPaI0mMwIBge+DywM5s8n8mR6ERFJk4SOgZtZUzNbBewAlgCfATvd/WCwSCHQLjUliohILAkFuLuXuXsOkAX0Bc6KtVis95pZnpnlm1m+HlwsIpI8tboKxd13AsuBc4BWZlb+UOQsYGuc98x191x3z83MzKxPrSIiUkEiV6FkmlmrYLw58ANgA7AMGBcsdhXwSqqKFBGRwx1V8yK0AeabWVMigf+Cu79mZh8Bz5nZ3cA/gCdSWKeIiFRRY4C7+xqgZ4z5m4kcDxcRkQagOzFFREJKAS4iElIKcBGRkFKAi4iElAJcRCSkFOAiIiGlABcRCSkFuIhISCnARURCSgEuIhJSCnARkZBSgIuIhJQCXEQkpBTgIiIhpQAXEQkpBbiISEgpwEVEQkoBLiISUok81Pg0M1tmZhvMbL2Z3RzMn2lmX5jZqmAYkfpyRUSkXCIPNT4I/Je7f2hmLYGVZrYkeG2Wuz+QuvJERCSeRB5qvA3YFozvMrMNQLtUFyYiItWr1TFwM2tP5An1fwtm3Whma8zsSTM7Ic578sws38zyi4qK6lWsiIj8R8IBbmbHAi8Bt7j7v4FHgO8BOUT20H8b633uPtfdc909NzMzMwkli4gIJBjgZpZBJLyfcfc/A7j7dncvc/dDwB+AvqkrU0REqkrkKhQDngA2uPt/V5jfpsJilwLrkl+eiIjEk8hVKAOAHwJrzWxVMO82YLyZ5QAOFADXpaRCERGJKZGrUFYAFuOlRckvR0REEqU7MUVEQkoBLiISUgpwEZGQUoCLiISUAlxEJKQU4CIiIaUAFxEJKQW4iEhIKcBFREJKAS4iElIKcBGRKswiQ2OnABcRCSkFuIhISCnARURCSgEuIhJSCnARkZBSgIuIhFQiz8Q8zcyWmdkGM1tvZjcH8080syVmtin4eULqyxURkXKJ7IEfBP7L3c8CzgF+YmZdgGnAW+7eEXgrmBYRkTSpMcDdfZu7fxiM7wI2AO2Ai4H5wWLzgUtSVaSIiByuVsfAzaw90BP4G3CKu2+DSMgDJye7OBERiS/hADezY4GXgFvc/d+1eF+emeWbWX5RUVFdahQRkRgSCnAzyyAS3s+4+5+D2dvNrE3wehtgR6z3uvtcd89199zMzMxk1CwiIiR2FYoBTwAb3P2/K7z0KnBVMH4V8EryyxMRkXiOSmCZAcAPgbVmtiqYdxvwG+AFM7sG+BdwWWpKFBGRWGoMcHdfAcTrWPG85JYjIiKJ0p2YIiIhpQAXEQkpBbiISEgpwEVEQkoBLiISUgpwEZGQUoCLiISUAlxEJKQU4CIiIaUAFxEJKQW4iEhIKcBFREJKAS4iElIKcBGRkFKAi4iElAJcRCSkFOAiIiGlABcRCalEHmr8pJntMLN1FebNNLMvzGxVMIxIbZkiIlJVInvg84ALYsyf5e45wbAouWWJiEhNagxwd38b+CoNtYiISC3U5xj4jWa2JjjEckK8hcwsz8zyzSy/qKioHpsTEWmczCJDutU1wB8BvgfkANuA38Zb0N3nunuuu+dmZmbWcXMiIlJVnQLc3be7e5m7HwL+APRNblkiIlKTOgW4mbWpMHkpsC7esiIikhpH1bSAmS0AhgCtzawQmAEMMbMcwIEC4LoU1igiIjHUGODuPj7G7CdSUIuIiNSC7sQUEQkpBbiISEgpwEVEQkoBLiISUgpwEZGQUoCLiISUAlxEJKQU4CIiIaUAFxEJKQW4iEhIKcBFREJKAS4iElIKcBGRkFKAi4iElAJcRCSkauwPXI4Q5U9cdW/YOkS+RSo+6DgVXz3tgYuIhJ
QCXEQkpGoMcDN70sx2mNm6CvNONLMlZrYp+HlCassUEZGqEtkDnwdcUGXeNOAtd+8IvBVMi4hIGtUY4O7+NvBVldkXA/OD8fnAJUmuS0REalDXY+CnuPs2gODnyfEWNLM8M8s3s/yioqI6bu4IYfafIdXbEJEjXspPYrr7XHfPdffczMzMVG9ORORbo64Bvt3M2gAEP3ckryQREUlEXQP8VeCqYPwq4JXklCMiIolK5DLCBcD7wJlmVmhm1wC/Ac43s03A+cG0iIikUY230rv7+DgvnZfkWiRRqb4/V0RCQXdiioiElAJcRCSkFOAiIiGlABcRCSn1B96Y6WSliFRDe+AiIiGlABcRCSkFuIhISCnARURCSgEuIhJSCvCK4vWlrT62RaQRUoCLiISUAlxEJKQU4CIiIaUAFxEJKd1KXx/lJzbre5u7bpkXkTrQHriISEjVaw/czAqAXUAZcNDdc5NRlIiI1CwZh1CGuvuXSViPiIjUgg6hiIiEVH0D3IHFZrbSzPJiLWBmeWaWb2b5RUVF9dxcSOjOTZFQCPtXtb4BPsDdewEXAj8xs3OrLuDuc909191zMzMz67k5EREpV68Ad/etwc8dwMtA32QUJSIiNatzgJtZCzNrWT4ODAPWJaswERGpXn2uQjkFeNkiB5COAp519/+XlKpERKRGdQ5wd98M9EhiLSIiUgu6jFBEJKQU4CIiIaUAFxEJKQW4iEhIKcBFRELqyO0PXH1sJyZen+bJ6utcRFJGe+AiIiGlABcRCSkFuIhISCnARURCKvwnMetyslIn6ETkCKA9cBGRkFKAi4iElAJcRCSkFOAiIiEVnpOYurOycdAJYGmkqouII/XXVnvgIiIhpQAXEQmpegW4mV1gZp+Y2admNi1ZRYmISM3q81T6psBDwIVAF2C8mXVJVmEiIlK9+uyB9wU+dffN7r4feA64ODlliYhITepzFUo7YEuF6ULg7KoLmVkekBdMlprZJ8F4a+DLOm254unm1MyPXVuy1l+/dVWuLfVtkej8uv97pl5jra2x1gUhr62BvhatzeLXVV0cJOC7sWbWJ8BjlXPYRTruPheYe9ibzfLdPbce208Z1VZ7jbUuaLy1Nda6QLXVRUPUVZ9DKIXAaRWms4Ct9StHREQSVZ8A/zvQ0cw6mNl3gMuBV5NTloiI1KTOh1Dc/aCZ3Qi8ATQFnnT39bVYxWGHVRoR1VZ7jbUuaLy1Nda6QLXVRdrrMj/S7i0VEfmW0J2YIiIhpQAXEQmptAe4mU0xM7fItdaxXi8zs1XBkNaTognUdpWZbQqGq9JQz6/MbE3QFovNrG2c5dLeZrWoLa1tFmzzfjP7OKjvZTNrFWe5AjNbG3yG/EZUV9q7qDCzy8xsvZkdMrO4l8Klu81qWVta283MTjSzJcHv9hIzOyHOcqn7frp72gYilx2+AXwOtI6zTGk6a0q0NuBEYHPw84Rg/IQU13RchfGbgEcbS5slUltDtFmw3WHAUcH4vcC9cZYriPd72FB1Ebkg4DPgDOA7wGqgSxpqOws4E1gO5FazXFrbLNHaGqLdgPuAacH4tGp+z1L2/Uz3Hvgs4OfEuOGnEaiptuHAEnf/yt2/BpYAF6SyIHf/d4XJFtXUlnYJ1pb2NgtqW+zuB4PJvxK5R6HBJVhXg3RR4e4b3P2TmpdMvwRra4h2uxiYH4zPBy5J8fYOk7YAN7PRwBfuvrqGRZuZWb6Z/dXM0tIgCdYWq+uAdiktDDCze8xsCzARuCvOYmlvswRra5A2q+Jq4H/ivObAYjNbGXT5kE7x6moMbVadhmyz6jREu53i7tsAgp8nx1kuZd/PpD6Rx8zeBE6N8dLtwG1E/oSsyenuvtXMzgCWmtlad/+sEdSWUNcByazL3V9x99uB281sOnAjMCPGsmlvswRrS0mbJVJbsMztwEHgmTirGRC028nAEjP72N3fbuC6GrTNEpD0NktSbWn/ftZiNSn5fkKSA9zdfxBrvp
l1BzoAqy3So0sW8KGZ9XX3/62yjq3Bz81mthzoSeTYVkPXVggMqTCdReSYXErqiuFZ4HViBHi626wWtaWkzaDm2oITpqOA8zw4EBljHeXttsPMXibyZ3i9wigJdaWsi4pa/HtWt46kt1mSaktJu1VXl5ltN7M27r7NzNoAO+KsIyXfz/KVp30gzokQIie6jg7GWwObSMMJnARrOxH4Z1DjCcH4iSmupWOF8Z8CCxtLmyVYW9rbLNjuBcBHQGY1y7QAWlYYfw+4oBHUdRSRk70d+M/JuK6pbrMK219O/BOFaW+zWtSW9nYD7qfyScz7YiyT0u9nWho+xoeKhiSQCzwejPcH1gaNvxa4prHUFkxfDXwaDD9OQy0vAeuANcD/Bdo1ljZLpLaGaLNgm58SOR66KhgeDea3BRYF42cEbbYaWE/kT/UGryuYHgFsJLKXlvK6gm1eSmQv9htgO/BGY2izRGtriHYDTgLeCkL5LYKdk3R+P3UrvYhISOlOTBGRkFKAi4iElAJcRCSkFOAiIiGlABcRCSkFuIhISCnARURC6v8Dvav54rjETDoAAAAASUVORK5CYII=\n",
"text/plain": [
"<Figure>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"plt.hist(\n",
" [means.flatten(), aggregates.flatten()], \n",
" color=['red', 'blue'], \n",
" bins=50\n",
");\n",
"plt.legend(['Mean Own Elasticities', 'Aggregate Elasticities']);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Marginal Costs and Markups\n",
"\n",
"To compute marginal costs, $c$, the `product_data` passed to [`Problem`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.html#pyblp.Problem) must have had a `firm_ids` field. Since we included firm IDs when configuring the problem, we can use [`ProblemResults.compute_costs`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.compute_costs.html#pyblp.ProblemResults.compute_costs)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXcAAAD4CAYAAAAXUaZHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAXa0lEQVR4nO3de3CV9Z3H8fdXLmLBK8SKRDa4g0uRS6gpoUBZFbWCKI6KQFGj2GF3Leou2wpI0Q5jO3gZqo6dWooVdC2gSEeUdV0WRasD2iABoqhcRIiwmFJhFUVI+e4f50l6DCeQc55zTsIvn9fMmfOc5/rlycMnv/yeyzF3R0REwnJcUxcgIiLZp3AXEQmQwl1EJEAKdxGRACncRUQC1LqpCwDo1KmTFxUVNXUZIiLHlNWrV//Z3QtSTWsW4V5UVER5eXlTlyEickwxs48amqZuGRGRACncRUQCpHAXEQlQs+hzF5H8OXjwIFVVVezfv7+pS5FGateuHYWFhbRp06bRyyjcRVqYqqoqTjzxRIqKijCzpi5HjsLd2b17N1VVVXTr1q3Ry6lbRqSF2b9/Px07dlSwHyPMjI4dO6b9l5bCXaQFUrAfWzL5eSncRUQCpD53kRauaMrSrK5v68zLjjqPmXHdddfx5JNPAlBTU0Pnzp0pLS3lhRdeyEodP/zhD5k0aRI9e/ZMe9m5c+dSXl7OI488cti0F198kenTp7Nv3z7cnREjRvDAAw+ktf6Kigp27NjB8OHD066tsRTu0qIcKcgaE0qSHe3bt6eyspIvv/ySE044gWXLltGlS5e01lFTU0Pr1g1H2Jw5c+KWeZjKykomTpzI0qVL6dGjBzU1NcyePTvt9VRUVFBeXp7TcFe3jIg0iWHDhrF0aeKX7fz58xk7dmzdtLfeeouBAwfSr18/Bg4cyPvvvw8kWtSjRo3i8ssv55JLLuHQoUPccsstnHvuuYwYMYLhw4ezaNEiAM4///y6x5p06NCBadOm0bdvXwYMGMCuXbsAeP755yktLaVfv35cdNFFdeMbct999zFt2jR69OgBQOvWrbnlllsA+Oijjxg6dCh9+vRh6NChbNu2DYBnnnmGXr160bdvX4YMGcKBAwe46667WLhwIcXFxSxcuJBXX32V4uJiiouL6devH5999lns/atwF5EmMWbMGBYsWMD+/ftZt24dpaWlddN69OjBa6+9xpo1a5gxYwZ33nln3bSVK1cyb948Xn75ZRYvXszWrVtZv349c+bMYeXKlSm3tW/fPgYMGMDatWsZMmQIv/3tbwEYPHgwq1atYs2aNYwZM4b77rvviDVXVlZy3nnnpZw2ceJEbrjhBtatW8e4ceO47bbbAJgxYwYvvfQSa9euZcmSJbRt25YZM2YwevRoKioqGD16NA888AC/+tWvqKio4I9//CMnnHBCWvsyFXXLyDGtoW4WdbE0f3369GHr1q3Mnz//sO6JvXv3UlZWxsaNGzEzDh48WDft4osv5rTTTgPg9ddfZ9SoURx33HGcccYZXHDBBSm31bZtW0aMGAHAeeedx7Jly4DENf+jR49m586dHDhwIK3ryOtbuXIlixcvBuD666/njjvuAGDQoEHceOONXHvttVx11VUplx00aBCTJk1i3LhxXHXVVRQWFmZcRy213EWkyVxxxRX8+Mc//lqXDMD06dO54IILqKys5Pnnn//aNd7t27evG3b3Rm2nTZs2dZcTtmrVipqaGgBuvfVWJk6cyPr16/nNb35z1GvJzz33XFavXt2obdZu79FHH+Wee+5h+/btFBcXs3v37sPmnTJlCnPmzOHLL79kwIABvPfee43axpEo3EWkyYwfP5677rqL3r17f2383r17606wzp07t8HlBw8ezLPPPsuhQ4fYtWsXK1asSGv7yduZN2/eUef/yU9+wi9+8Qs++OADAA4dOsSsWbMAGDhwIAsWLADgqaeeYv
DgwQBs3ryZ0tJSZsyYQadOndi+fTsnnnji1/rVN2/eTO/evZk8eTIlJSVZCfejdsuY2e+AEcAn7t4rGnc/cDlwANgM3OTue6JpU4Gbgb8Ct7n7S7GrFJGcacourMLCQm6//fbDxt9xxx2UlZUxa9YsLrzwwgaXv/rqq1m+fDm9evXinHPOobS0lJNPPrnR2//Zz37GqFGj6NKlCwMGDODDDz884vx9+vThwQcfZOzYsXzxxReYGZddlth/Dz/8MOPHj+f++++noKCAxx9/HEj8Qti4cSPuztChQ+nbty9du3Zl5syZFBcXM3XqVF5//XVeeeUVWrVqRc+ePRk2bFij/w0NsaP9WWNmQ4DPgSeSwv0S4GV3rzGzewHcfbKZ9QTmA/2BM4H/Ac5x978eaRslJSWuL+uQTKTb565LIWHDhg1861vfauoysubzzz+nQ4cO7N69m/79+/PGG29wxhlnNHVZWZfq52Zmq929JNX8R225u/trZlZUb9x/J31cBVwTDY8EFrj7V8CHZraJRNCnPoUtIhLTiBEj2LNnDwcOHGD69OlBBnsmsnG1zHhgYTTchUTY16qKxomI5ES6/ewtRaxwN7NpQA3wVO2oFLOl7PcxswnABICuXbvGKUMkp0K83NLd9fCwY0hjrwpKlvHVMmZWRuJE6zj/25argLOSZisEdqRa3t1nu3uJu5cUFKT88m4RyYF27dqxe/fujAJD8q/2ee7t2rVLa7mMWu5mdikwGfhHd/8iadIS4PdmNovECdXuwFuZbENEcqOwsJCqqiqqq6ubuhRppNpvYkpHYy6FnA+cD3QysyrgbmAqcDywLPrTbpW7/7O7v2NmTwPvkuiu+dHRrpQRkfxq06ZNrDsx5djQmKtlxqYY/dgR5v858PM4RYmISDx6toxIJNvPNRdpSnr8gIhIgBTuIiIBUreMBEldLNLSKdxFMhTizU0SDnXLiIgESOEuIhIghbuISIAU7iIiAVK4i4gESOEuIhIghbuISIAU7iIiAVK4i4gESOEuIhIghbuISID0bBmRPNGzaCSf1HIXEQmQwl1EJEAKdxGRACncRUQCpHAXEQmQwl1EJEAKdxGRAB013M3sd2b2iZlVJo07zcyWmdnG6P3UaLyZ2cNmtsnM1pnZt3NZvIiIpNaYlvtc4NJ646YAy929O7A8+gwwDOgevSYAv85OmSIiko6j3qHq7q+ZWVG90SOB86PhecAKYHI0/gl3d2CVmZ1iZp3dfWe2ChZp7hq6E1UknzLtc/9mbWBH76dH47sA25Pmq4rGHcbMJphZuZmVV1dXZ1iGiIikku0TqpZinKea0d1nu3uJu5cUFBRkuQwRkZYt03DfZWadAaL3T6LxVcBZSfMVAjsyL09ERDKRabgvAcqi4TLguaTxN0RXzQwA9qq/XUQk/456QtXM5pM4edrJzKqAu4GZwNNmdjOwDRgVzf6fwHBgE/AFcFMOapYApPv4W52kFElPY66WGdvApKEp5nXgR3GLEhGReHSHqohIgPRNTNKsqPtFJDvUchcRCZDCXUQkQAp3EZEAKdxFRAKkcBcRCZDCXUQkQAp3EZEAKdxFRAKkcBcRCZDCXUQkQAp3EZEAKdxFRAKkcBcRCZDCXUQkQAp3EZEAKdxFRAKkcBcRCZDCXUQkQAp3EZEAKdxFRAKkcBcRCZDCXUQkQLHC3cz+zczeMbNKM5tvZu3MrJuZvWlmG81soZm1zVaxIiLSOBmHu5l1AW4DSty9F9AKGAPcC/zS3bsDnwI3Z6NQERFpvNZZWP4EMzsIfAPYCVwI/CCaPg/4GfDrmNsRCVbRlKUpx2+deVmeK5GQZNxyd/ePgQeAbSRCfS+wGtjj7jXRbFVAl1TLm9kEMys3s/Lq6upMyxARkRTidMucCowEugFnAu2BYSlm9VTLu/tsdy9x95KCgoJMyxARkRTinFC9CPjQ3avd/SCwGBgInGJmtd09hcCOmDWKiEia4oT7NmCAmX3DzA
wYCrwLvAJcE81TBjwXr0QREUlXxidU3f1NM1sEvA3UAGuA2cBSYIGZ3RONeywbhcqxqaGThSKSW7GulnH3u4G7643eAvSPs14RaZiurpHG0B2qIiIBUriLiARI4S4iEiCFu4hIgBTuIiIBUriLiAQo7oPDRABdz54L2qcSh1ruIiIBUriLiARI4S4iEiCFu4hIgBTuIiIB0tUyIoHQA8UkmVruIiIBUriLiARI4S4iEiCFu4hIgBTuIiIBUriLiARI4S4iEiCFu4hIgBTuIiIBUriLiARI4S4iEqBY4W5mp5jZIjN7z8w2mNl3zew0M1tmZhuj91OzVayIiDRO3Jb7Q8B/uXsPoC+wAZgCLHf37sDy6LOIiORRxuFuZicBQ4DHANz9gLvvAUYC86LZ5gFXxi1SRETSE6flfjZQDTxuZmvMbI6ZtQe+6e47AaL301MtbGYTzKzczMqrq6tjlCEiIvXFCffWwLeBX7t7P2AfaXTBuPtsdy9x95KCgoIYZYiISH1xvqyjCqhy9zejz4tIhPsuM+vs7jvNrDPwSdwiRSRz+hKPlinjlru7/y+w3cz+IRo1FHgXWAKURePKgOdiVSgiImmL+zV7twJPmVlbYAtwE4lfGE+b2c3ANmBUzG2IiEiaYoW7u1cAJSkmDY2zXmm+GvoTX0SaF92hKiISIIW7iEiAFO4iIgFSuIuIBEjhLiISIIW7iEiAFO4iIgFSuIuIBEjhLiISIIW7iEiAFO4iIgFSuIuIBEjhLiISIIW7iEiAFO4iIgFSuIuIBEjhLiISoLhfsyeB0jcuiRzb1HIXEQmQwl1EJEDqlhFpoRrqets687I8VyK5oJa7iEiAFO4iIgFSuIuIBCh2uJtZKzNbY2YvRJ+7mdmbZrbRzBaaWdv4ZYqISDqy0XK/HdiQ9Ple4Jfu3h34FLg5C9sQEZE0xAp3MysELgPmRJ8NuBBYFM0yD7gyzjZERCR9cVvuDwJ3AIeizx2BPe5eE32uArqkWtDMJphZuZmVV1dXxyxDRESSZRzuZjYC+MTdVyePTjGrp1re3We7e4m7lxQUFGRahoiIpBDnJqZBwBVmNhxoB5xEoiV/ipm1jlrvhcCO+GWKSL4c6blCusHp2JFxy93dp7p7obsXAWOAl919HPAKcE00WxnwXOwqRUQkLbm4zn0yMMnMNpHog38sB9sQEZEjyMqzZdx9BbAiGt4C9M/GekVEJDO6Q1VEJEAKdxGRACncRUQCpHAXEQmQwl1EJEAKdxGRACncRUQCpHAXEQmQwl1EJEBZuUNVRFqGhh4qpgeKNT8K9xbgSE/5E8kGhX7zo24ZEZEAKdxFRAKkcBcRCZDCXUQkQAp3EZEAKdxFRAKkcBcRCZDCXUQkQAp3EZEAKdxFRAKkcBcRCZCeLSMieadn0eSeWu4iIgHKuOVuZmcBTwBnAIeA2e7+kJmdBiwEioCtwLXu/mn8UlsetW7kWKcnkjadON0yNcC/u/vbZnYisNrMlgE3AsvdfaaZTQGmAJPjlypHo/9IIlIr424Zd9/p7m9Hw58BG4AuwEhgXjTbPODKuEWKiEh6stLnbmZFQD/gTeCb7r4TEr8AgNMbWGaCmZWbWXl1dXU2yhARkUjscDezDsCzwL+6+/81djl3n+3uJe5eUlBQELcMERFJEutSSDNrQyLYn3L3xdHoXWbW2d13mlln4JO4RYpIy6CLCLIn45a7mRnwGLDB3WclTVoClEXDZcBzmZcnIiKZiNNyHwRcD6w3s4po3J3ATOBpM7sZ2AaMileiiIikK+Nwd/fXAWtg8tBM1ysiIvHpDlURkQAp3EVEAqRwFxEJkMJdRCRACncRkQDpee7NgB74JZIZ3fTUMLXcRUQCpJZ7HmWrha6WvrQ0OubTp5a7iEiAFO4iIgFSuIuIBEjhLiISIIW7iEiAFO4iIgHSpZAi0uKFeDOUwj2GEA8IkRDo/6a6ZUREgqSWew7objqR5qkl/d9Uy11EJEBquT
dCS/ptLyJhULiLiGSguZ+0VbeMiEiAFO4iIgFSuIuIBMjcPTcrNrsUeAhoBcxx95kNzVtSUuLl5eVZryHdPjGdOBWRXMlFX7yZrXb3kpTTchHuZtYK+AC4GKgC/gSMdfd3U80fJ9wVyCJyLIsT+kcK91x1y/QHNrn7Fnc/ACwARuZoWyIiUk+uLoXsAmxP+lwFlCbPYGYTgAnRx8/N7P0c1ZKuTsCfm7qII1B9mWvOtYHqi6M51wZHqM/ujbXev2toQq7C3VKM+1r/j7vPBmbnaPsZM7Pyhv7MaQ5UX+aac22g+uJozrVB09SXq26ZKuCspM+FwI4cbUtEROrJVbj/CehuZt3MrC0wBliSo22JiEg9OemWcfcaM5sIvETiUsjfufs7udhWDjS7rqJ6VF/mmnNtoPriaM61QRPUl7Pr3EVEpOnoDlURkQAp3EVEAhR8uJvZpWb2vpltMrMpKaYfb2YLo+lvmllR0rSp0fj3zez7jV1nrmszs4vNbLWZrY/eL0xaZkW0zorodXoT1FdkZl8m1fBo0jLnRXVvMrOHzSzVZbO5rm9cUm0VZnbIzIqjafncf0PM7G0zqzGza+pNKzOzjdGrLGl8VvZfprWZWbGZrTSzd8xsnZmNTpo218w+TNp3xZnUFqe+aNpfk2pYkjS+W3QcbIyOi7b5rM3MLqh33O03syujaVnbd3XcPdgXiZO5m4GzgbbAWqBnvXluAR6NhscAC6PhntH8xwPdovW0asw681BbP+DMaLgX8HHSMiuAkibed0VAZQPrfQv4Lol7IV4EhuW7vnrz9Aa2NNH+KwL6AE8A1ySNPw3YEr2fGg2fmq39F7O2c4Du0fCZwE7glOjz3OR5m2LfRdM+b2C9TwNjouFHgX/Jd231fsZ/Ab6RzX2X/Aq95d6YxyCMBOZFw4uAoVFraCSwwN2/cvcPgU3R+rL1aIWMa3P3Ne5ee9/AO0A7Mzs+gxpyUl9DKzSzzsBJ7r7SE0f0E8CVTVzfWGB+hjXEqs/dt7r7OuBQvWW/Dyxz97+4+6fAMuDSLO6/jGtz9w/cfWM0vAP4BCjIoIac1NeQ6Od+IYnjABLHRV73XT3XAC+6+xcZ1NAooYd7qscgdGloHnevAfYCHY+wbGPWmevakl0NrHH3r5LGPR79aTc9RrdH3Pq6mdkaM3vVzL6XNH/VUdaZr/pqjebwcM/X/kt32Wztv6wcw2bWn0TrdXPS6J9H3TW/jNHgiFtfOzMrN7NVtd0eJH7ue6LjIJN1Zqu2WmM4/LjLxr6rE3q4H/UxCEeYJ93x6YpTW2Ki2bnAvcA/JU0f5+69ge9Fr+szqC1ufTuBru7eD5gE/N7MTmrkOvNRX2KiWSnwhbtXJk3P5/5Ld9l8HntHXkHir4gngZvcvbaFOhXoAXyHRLfD5Axqy0Z9XT1xq/8PgAfN7O+zsM5s1Va773qTuA+oVrb2XZ3Qw70xj0Gom8fMWgMnk+gLa2jZbD1aIU5tmFkh8AfgBnevazm5+8fR+2fA70n8GZmJjOuLurJ2R3WsJtGyOyeav/Ao68x5fUnTD2s95Xn/pbtstvZfrGM4+kW9FPipu6+qHe/uOz3hK+Bxmmbf1XYX4e5bSJxD6UfioV2nRMdB2uvMVm2Ra4E/uPvBpJqzte/qhB7ujXkMwhKg9mqEa4CXo/7MJcAYS1xx0Q3oTuJkVrYerZBxbWZ2Con/XFPd/Y3amc2stZl1iobbACOASjITp74CSzzTHzM7m8S+2+LuO4HPzGxA1N1xA/BcvuuL6joOGEWiz5RoXL73X0NeAi4xs1PN7FTgEuClLO6/jGuL5v8D8IS7P1NvWufo3Uj0Z+d930X77PhouBMwCHg3+rm/QuI4gMRxkdd9l+Sw8zxZ3Hd/k82zs83xBQwn8cUhm4Fp0bgZwBXRcDvgGRInTN8Czk5adlq03PskXZWQap35rA34KbAPqEh6nQ60B1YD60icaH0IaNUE9V0dbX8t8DZwedI6S0
gcuJuBR4jukm6Cn+35wKp668v3/vsOiZbgPmA38E7SsuOjujeR6PrI6v7LtDbgOuBgvWOvOJr2MrA+qu8/gA753nfAwKiGtdH7zUnrPDs6DjZFx8XxTfBzLQI+Bo6rt86s7bvalx4/ICISoNC7ZUREWiSFu4hIgBTuIiIBUriLiARI4S4iEiCFu4hIgBTuIiIB+n+QzFUnbO0oFgAAAABJRU5ErkJggg==\n",
"text/plain": [
"<Figure>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"costs = results.compute_costs()\n",
"plt.hist(costs, bins=50);\n",
"plt.legend([\"Marginal Costs\"]);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Other methods that compute supply-side outputs often compute marginal costs themselves. For example, [`ProblemResults.compute_markups`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.compute_markups.html#pyblp.ProblemResults.compute_markups) will compute marginal costs when estimating markups, $\\mathscr{M}$, but computation can be sped up if we just use our pre-computed values."
]
},
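As a quick illustration of what these markups are, here is a minimal sketch of the Lerner-index definition, $(p - c) / p$, that the estimated markups correspond to. The prices and costs below are hypothetical stand-ins for the arrays computed above, not output from the actual problem:

```python
import numpy as np

# Hypothetical prices and marginal costs, standing in for the arrays
# returned by results.compute_costs() above.
prices = np.array([2.0, 3.0, 4.0])
costs = np.array([1.0, 1.5, 3.0])

# Lerner-index markups, (p - c) / p.
markups_sketch = (prices - costs) / prices  # [0.5, 0.5, 0.25]
```

Passing pre-computed `costs` to `compute_markups`, as in the next cell, simply skips re-estimating the `c` in this expression.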
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXcAAAD4CAYAAAAXUaZHAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAVTUlEQVR4nO3df5DddX3v8ee7JDYSQX5k4WZI0gUNAo1p4l0JgnpBCgMpEi3gJVMwKLrASK+1pZpSrQx3OsO0/Bg7LdpYMkkcDdDyK1F6LaNSSgFhQ2iaGNPyY4tbMiQGb9QJsQbe94/zTe5JOJs9u+ec3ewnz8fMzjnfz/f7Pd/3J7t57Xc/53s+38hMJEll+ZWxLkCS1H6GuyQVyHCXpAIZ7pJUIMNdkgo0YawLAJgyZUp2d3ePdRmSNK6sWbPmx5nZ1WjdARHu3d3d9PX1jXUZkjSuRMR/DLbOYRlJKpDhLkkFMtwlqUAHxJi7pIPHL3/5SwYGBti5c+dYlzJuTJo0iWnTpjFx4sSm9zHcJY2qgYEBDjvsMLq7u4mIsS7ngJeZbNu2jYGBAY4//vim93NYRtKo2rlzJ0cffbTB3qSI4Oijjx72XzqGu6RRZ7APz0j+vQx3SSqQY+6SxlT34m+19fX6b/qtIbeJCC677DK+9rWvAbBr1y6mTp3KvHnz+OY3v9n0sR5++GFuvvnmYe0zWgz3Dhjsh7WZHzpJnTd58mTWr1/Pq6++ypvf/GYeeughjjvuuGG9xq5duzpUXXs4LCPpoHT++efzrW/VTsRWrlzJwoUL96x78sknOf3005k7dy6nn346mzZtAmDZsmVccsklfPCDH+Tcc8/d6/Weeuop5s6dy/PPP88NN9zAzTffvGfdrFmz6O/vp7+/n5NOOolFixYxe/ZsLr74Ynbs2AHA4sWLOeWUU5g9ezbXXXddy/0z3CUdlC699FLuvPNOdu7cybp165g3b96edSeddBKPPPIIa9eu5cYbb+T666/fs+7xxx9n+fLlfPe7393T9thjj3H11VfzwAMPcMIJJ+z3uJs2baK3t5d169Zx+OGHc/vtt/PKK69w3333sWHDBtatW8fnP//5lvtnuEs6KM2ePZv+/n5WrlzJ/Pnz91q3fft2LrnkEmbNmsVnPvMZNmzYsGfdOeecw1FHHbVneePGjfT29rJ69WpmzJgx5HGnT5/OGWecAcBll13Go48+yuGHH86kSZP4xCc+wb333suhhx7acv8Md0kHrQsvvJDrrrturyEZgC984QucddZZrF+/ntWrV+91jfnkyZP32nbq1KlMmjSJtWvX7mmbMGECr7/++p7l+v33vawxIpgwYQJPPvkkF110Effffz/nnXdey33zDVVJB62Pf/zjvPWtb+Wd73wnDz/88J727du373mDddmyZft9jSOOOII77riDc889l8mTJ3PmmWfS3d295wqap59+mhdeeGHP9i+++CKPP/4473nPe1i5ciXvfe97+fnPf86OHTuYP38+p512Gm9/+9tb7tuQ4R4R04EVwH8DXgeWZOaXIuIo4C6gG+gHPpKZP4nar6UvAfOBHcAVmfl0y5VKKtJYXkU2bdo0Pv3pT7+h/bOf/SyLFi3i1ltv5QMf+MCQr3PssceyevVqzj//fJYuXcpFF13EihUrmDNnDu9+97s58cQT92x78skns3z5cq666ipmzpzJNddcw/bt21mwYAE7d+4kM7ntttta7ltk5v43iJgKTM3MpyPiMGAN8CHgCuCVzLwpIhYDR2bm5yJiPvC71MJ9HvClzJw3yMsD0NPTkyXdrMNLIaXBbdy4kZNPPnmsyxgT/f39XHDBBaxfv37Y+zb6d4uINZnZ02j7IcfcM3Pz7jPvzPwZsBE4DlgALK82W04t8KnaV2TNE8AR1S8ISdIoGdaYe0R0A3OB7wPHZuZmqP0CiIhjqs2OA35Ut9tA1ba51WLHim
fiktqhu7t7RGftI9H01TIR8RbgHuD3MvOn+9u0Qdsbxn4iojci+iKib+vWrc2WIakAQw0Ha28j+fdq6sw9IiZSC/avZ+a9VfPLETG1OmufCmyp2geA6XW7TwNealDsEmAJ1Mbch135AaDdc2JIB4NJkyaxbds2p/1t0u753CdNmjSs/Zq5WiaAO4CNmXlr3apVwCLgpurxgbr2ayPiTmpvqG7fPXwjSdOmTWNgYAD/Ym/e7jsxDUczZ+5nAJcD/xoRz1Rt11ML9bsj4krgReCSat2D1K6UeZbapZAfG1ZFkoo2ceLEYd1RSCMzZLhn5qM0HkcHOLvB9gl8qsW6JEktcPoBSSqQ4S5JBTLcJalAhrskFchwl6QCGe6SVCDncx9FzlEjabR45i5JBTLcJalAhrskFchwl6QCGe6SVCDDXZIKZLhLUoEMd0kqkOEuSQUaMtwjYmlEbImI9XVtd0XEM9VX/+47NEVEd0S8WrfuK50sXpLUWDPTDywD/hJYsbshM//n7ucRcQuwvW775zJzTrsKlCQNXzO32XskIrobratunv0R4APtLUuS1IpWx9zfB7ycmf9e13Z8RKyNiH+MiPcNtmNE9EZEX0T0eRd0SWqvVsN9IbCybnkzMCMz5wK/D3wjIg5vtGNmLsnMnszs6erqarEMSVK9EYd7REwAfhu4a3dbZv4iM7dVz9cAzwEntlqkJGl4WpnP/TeBH2bmwO6GiOgCXsnM1yLiBGAm8HyLNRbPed4ltVszl0KuBB4H3hERAxFxZbXqUvYekgF4P7AuIv4F+Dvg6sx8pZ0FS5KG1szVMgsHab+iQds9wD2tlyVJaoWfUJWkAhnuklQgw12SCmS4S1KBDHdJKpDhLkkFMtwlqUCGuyQVyHCXpAIZ7pJUIMNdkgrUyqyQxRlsdkZJGm88c5ekAhnuklQgw12SCmS4S1KBhnxDNSKWAhcAWzJzVtV2A/BJYGu12fWZ+WC17o+AK4HXgP+Vmd/uQN0HBW+/J2mkmjlzXwac16D9tsycU33tDvZTqN1+79erfW6PiEPaVawkqTlDhntmPgI0ex/UBcCdmfmLzHwBeBY4tYX6JEkj0MqY+7URsS4ilkbEkVXbccCP6rYZqNreICJ6I6IvIvq2bt3aaBNJ0giNNNy/DLwNmANsBm6p2qPBttnoBTJzSWb2ZGZPV1fXCMuQJDUyonDPzJcz87XMfB34Kv9/6GUAmF636TTgpdZKlCQN14jCPSKm1i1+GFhfPV8FXBoRvxoRxwMzgSdbK1GSNFzNXAq5EjgTmBIRA8AXgTMjYg61IZd+4CqAzNwQEXcDPwB2AZ/KzNc6U7okaTBDhntmLmzQfMd+tv9T4E9bKUqS1Bo/oSpJBTLcJalAhrskFchwl6QCGe6SVCDDXZIKZLhLUoEMd0kqkOEuSQUy3CWpQIa7JBXIcJekAhnuklQgw12SCmS4S1KBhgz36gbYWyJifV3bn0fED6sbZN8XEUdU7d0R8WpEPFN9faWTxUuSGmvmzH0ZcN4+bQ8BszJzNvBvwB/VrXsuM+dUX1e3p0xJ0nA0cyemRyKie5+2f6hbfAK4uL1ldVb34m+NdQmS1FHtGHP/OPD3dcvHR8TaiPjHiHhfG15fkjRMQ565709E/DG1G2F/vWraDMzIzG0R8d+B+yPi1zPzpw327QV6AWbMmNFKGZKkfYz4zD0iFgEXAL+TmQmQmb/IzG3V8zXAc8CJjfbPzCWZ2ZOZPV1dXSMtQ5LUwIjCPSLOAz4HXJiZO+rauyLikOr5CcBM4Pl2FCpJat6QwzIRsRI4E5gSEQPAF6ldHfOrwEMRAfBEdWXM+4EbI2IX8BpwdWa+0qHaJUmDaOZqmYUNmu8YZNt7gHtaLUr7t7+rffpv+q1RrETSgcpPqEpSgQx3SSqQ4S5JBTLcJalAhrskFchwl6QCGe6SVCDDXZIKZLhLUoFamhVSB57BPr3qJ1elg4tn7pJUIMNdkgpkuEtSgQ
x3SSqQ4S5JBTLcJalATYV7RCyNiC0Rsb6u7aiIeCgi/r16PLJqj4j4i4h4NiLWRcS7OlW8JKmxZq9zXwb8JbCirm0x8J3MvCkiFlfLnwPOp3bv1JnAPODL1aPGkNe/SweXps7cM/MRYN97oS4AllfPlwMfqmtfkTVPAEdExNR2FCtJak4rY+7HZuZmgOrxmKr9OOBHddsNVG17iYjeiOiLiL6tW7e2UIYkaV+deEM1GrTlGxoyl2RmT2b2dHV1daAMSTp4tRLuL+8ebqket1TtA8D0uu2mAS+1cBxJ0jC1Eu6rgEXV80XAA3XtH62umjkN2L57+EaSNDqaulomIlYCZwJTImIA+CJwE3B3RFwJvAhcUm3+IDAfeBbYAXyszTVLkobQVLhn5sJBVp3dYNsEPtVKUZKk1vgJVUkqkOEuSQUy3CWpQIa7JBXIcJekAhnuklQgw12SCmS4S1KBDHdJKpDhLkkFMtwlqUDN3mZPhfL2e1KZPHOXpAIZ7pJUIMNdkgpkuEtSgUb8hmpEvAO4q67pBOBPgCOATwJbq/brM/PBEVcoSRq2EYd7Zm4C5gBExCHAfwL3Ubut3m2ZeXNbKpQkDVu7hmXOBp7LzP9o0+tJklrQrnC/FFhZt3xtRKyLiKURcWSjHSKiNyL6IqJv69atjTaRJI1Qy+EeEW8CLgT+tmr6MvA2akM2m4FbGu2XmUsysycze7q6ulotQ5JUpx1n7ucDT2fmywCZ+XJmvpaZrwNfBU5twzEkScPQjnBfSN2QTERMrVv3YWB9G44hSRqGluaWiYhDgXOAq+qa/ywi5gAJ9O+zTpI0CloK98zcARy9T9vlLVUkSWqZn1CVpAIZ7pJUIMNdkgpU9M06BrsRhSSVruhw18h5hyZpfHNYRpIKZLhLUoEMd0kqkOEuSQUy3CWpQIa7JBXIcJekAhnuklQgw12SCmS4S1KBWp5+ICL6gZ8BrwG7MrMnIo4C7gK6qd2w4yOZ+ZNWjyVJak675pY5KzN/XLe8GPhOZt4UEYur5c+16VgaQ845I40PnRqWWQAsr54vBz7UoeNIkhpoR7gn8A8RsSYiequ2YzNzM0D1eMy+O0VEb0T0RUTf1q1b21CGJGm3dgzLnJGZL0XEMcBDEfHDZnbKzCXAEoCenp5sQx2SpErLZ+6Z+VL1uAW4DzgVeDkipgJUj1taPY4kqXkthXtETI6Iw3Y/B84F1gOrgEXVZouAB1o5jiRpeFodljkWuC8idr/WNzLz/0TEU8DdEXEl8CJwSYvHkSQNQ0vhnpnPA7/RoH0bcHYrry1JGjk/oSpJBfIG2WoLP9wkHVg8c5ekAhnuklQgw12SCmS4S1KBDHdJKpDhLkkFMtwlqUCGuyQVyHCXpAIZ7pJUIMNdkgpkuEtSgZw4TB3lhGLS2DDcdUDxl4HUHiMelomI6RHxvYjYGBEbIuLTVfsNEfGfEfFM9TW/feVKkprRypn7LuAPMvPp6j6qayLioWrdbZl5c+vlSZJGYsThnpmbgc3V859FxEbguHYVJkkaubaMuUdENzAX+D5wBnBtRHwU6KN2dv+TBvv0Ar0AM2bMaOn4g43TStLBquVLISPiLcA9wO9l5k+BLwNvA+ZQO7O/pdF+mbkkM3sys6erq6vVMiRJdVoK94iYSC3Yv56Z9wJk5suZ+Vpmvg58FTi19TIlScMx4mGZiAjgDmBjZt5a1z61Go8H+DCwvrUSVSKH0qTOamXM/QzgcuBfI+KZqu16YGFEzAES6AeuaqlCSdKwtXK1zKNANFj14MjLkSS1g3PLSFKBDHdJKpBzy0gV57VRSTxzl6QCGe6SVCDDXZIK5Ji7xoXhfujJcXId7Dxzl6QCGe6SVCDDXZIK5Ji7DjpOWqaDgeEuDcEPN2k8clhGkgrkmbuK5NCLDnaGu3SAcjhIrTDcpRFq118HhrU6oWPhHhHnAV8CDgH+JjNv6tSxJLWXfzWMfx0J94g4BPgr4BxgAHgqIlZl5g
86cTxpPBvuXwDtfD9huGFt6I8fnTpzPxV4NjOfB4iIO4EFgOEuaVDtmkPoQPwlNNo1RWa2/0UjLgbOy8xPVMuXA/My89q6bXqB3mrxHcCmtheytynAjzt8jLFi38afUvsF9m00/VpmdjVa0akz90Y3zt7rt0hmLgGWdOj4bxARfZnZM1rHG032bfwptV9g3w4UnfoQ0wAwvW55GvBSh44lSdpHp8L9KWBmRBwfEW8CLgVWdehYkqR9dGRYJjN3RcS1wLepXQq5NDM3dOJYwzBqQ0BjwL6NP6X2C+zbAaEjb6hKksaWE4dJUoEMd0kqUHHhHhHnRcSmiHg2IhY3WP/7EfGDiFgXEd+JiF8bizpHYqi+1W13cURkRIyLS7aa6VdEfKT6vm2IiG+Mdo0j1cTP44yI+F5ErK1+JuePRZ3DFRFLI2JLRKwfZH1ExF9U/V4XEe8a7RpHqom+/U7Vp3UR8VhE/MZo19iUzCzmi9qbt88BJwBvAv4FOGWfbc4CDq2eXwPcNdZ1t6tv1XaHAY8ATwA9Y113m75nM4G1wJHV8jFjXXcb+7YEuKZ6fgrQP9Z1N9m39wPvAtYPsn4+8PfUPvNyGvD9sa65jX07ve5n8fwDtW+lnbnvmfYgM/8L2D3twR6Z+b3M3FEtPkHtGvzxYMi+Vf438GfAztEsrgXN9OuTwF9l5k8AMnPLKNc4Us30LYHDq+dvZZx8HiQzHwFe2c8mC4AVWfMEcERETB2d6lozVN8y87HdP4scwBlSWrgfB/yobnmgahvMldTOLsaDIfsWEXOB6Zn5zdEsrEXNfM9OBE6MiH+OiCeqGUfHg2b6dgNwWUQMAA8Cvzs6pXXccP8vjlcHbIaUNp/7kNMe7Nkw4jKgB/gfHa2offbbt4j4FeA24IrRKqhNmvmeTaA2NHMmtbOkf4qIWZn5fztcW6ua6dtCYFlm3hIR7wG+VvXt9c6X11FN/18cryLiLGrh/t6xrqWR0s7cm5r2ICJ+E/hj4MLM/MUo1daqofp2GDALeDgi+qmNc64aB2+qNvM9GwAeyMxfZuYL1CaZmzlK9bWimb5dCdwNkJmPA5OoTU413hU9BUlEzAb+BliQmdvGup5GSgv3Iac9qIYu/ppasI+XsVsYom+ZuT0zp2Rmd2Z2UxsLvDAz+8am3KY1M1XF/dTeCCciplAbpnl+VKscmWb69iJwNkBEnEwt3LeOapWdsQr4aHXVzGnA9szcPNZFtUNEzADuBS7PzH8b63oGU9SwTA4y7UFE3Aj0ZeYq4M+BtwB/GxEAL2bmhWNWdJOa7Nu402S/vg2cGxE/AF4D/vBAPVuq12Tf/gD4akR8htqwxRVZXYZxIIuIldSGyaZU7xd8EZgIkJlfofb+wXzgWWAH8LGxqXT4mujbnwBHA7dXGbIrD8CZIp1+QJIKVNqwjCQJw12SimS4S1KBDHdJKpDhLkkFMtwlqUCGuyQV6P8BBoNHwhDT4M0AAAAASUVORK5CYII=\n",
"text/plain": [
"<Figure>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"markups = results.compute_markups(costs=costs)\n",
"plt.hist(markups, bins=50);\n",
"plt.legend([\"Markups\"]);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Mergers\n",
"\n",
"Before computing post-merger outputs, we'll supplement our pre-merger markups with some other outputs. We'll compute Herfindahl-Hirschman Indices, $\\text{HHI}$, with [`ProblemResults.compute_hhi`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.compute_hhi.html#pyblp.ProblemResults.compute_hhi); population-normalized gross expected profits, $\\pi$, with [`ProblemResults.compute_profits`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.compute_profits.html#pyblp.ProblemResults.compute_profits); and population-normalized consumer surpluses, $\\text{CS}$, with [`ProblemResults.compute_consumer_surpluses`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.compute_consumer_surpluses.html#pyblp.ProblemResults.compute_consumer_surpluses)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"hhi = results.compute_hhi()\n",
"profits = results.compute_profits(costs=costs)\n",
"cs = results.compute_consumer_surpluses()"
]
},
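The profit computation above follows the population-normalized gross expected profit formula $\pi_{jt} = (p_{jt} - c_{jt}) s_{jt}$. A minimal sketch with hypothetical prices, costs, and shares (not values from the actual problem):

```python
import numpy as np

# Hypothetical prices, marginal costs, and shares for two products.
prices = np.array([2.0, 3.0])
costs = np.array([1.0, 1.5])
shares = np.array([0.2, 0.1])

# Population-normalized gross expected profits, (p - c) * s.
profits_sketch = (prices - costs) * shares  # [0.2, 0.15]
```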
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To compute post-merger outputs, we'll create a new set of firm IDs that represent a merger of firms ``2`` and ``1``."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"product_data['merger_ids'] = product_data['firm_ids'].replace(2, 1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use [`ProblemResults.compute_approximate_prices`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.compute_approximate_prices.html#pyblp.ProblemResults.compute_approximate_prices) or [`ProblemResults.compute_prices`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.compute_prices.html#pyblp.ProblemResults.compute_prices) to estimate post-merger prices. The first method, which is discussed, for example, in [Nevo (1997)](https://pyblp.readthedocs.io/en/stable/references.html#nevo-1997), assumes that shares and their price derivatives are unaffected by the merger. The second method does not make these assumptions and iterates over the $\\zeta$-markup equation from [Morrow and Skerlos (2011)](https://pyblp.readthedocs.io/en/stable/references.html#morrow-and-skerlos-2011) to solve the full system of $J_t$ equations and $J_t$ unknowns in each market $t$. We'll use the latter, since it is fast enough for this example problem."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"changed_prices = results.compute_prices(\n",
" firm_ids=product_data['merger_ids'], \n",
" costs=costs\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We'll compute post-merger shares with [`ProblemResults.compute_shares`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.compute_shares.html#pyblp.ProblemResults.compute_shares)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"changed_shares = results.compute_shares(changed_prices)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Post-merger prices and shares are used to compute other post-merger outputs. For example, $\\text{HHI}$ increases."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAWoAAAD4CAYAAADFAawfAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy8QZhcZAAAQgElEQVR4nO3de5CddX3H8feXJGSRBInJ1gGX7WLrLYZ1E5YEJtERqIpJBXFQYSxNrbjj0HSIU6bGccYJDH+EjoXUQZmGSkOLJcMgmTrBC5fgBaaQi8SQC8RAw7iEO4KumJjQb//Ys8sS9nL2crI/ct6vmTN7zu/5ned8n9958slznss5kZlIksp11HgXIEkanEEtSYUzqCWpcAa1JBXOoJakwk2sxUxnzJiRLS0ttZi1JB2RNm/e/HxmNvY3rSZB3dLSwqZNm2oxa0k6IkXEEwNNc9eHJBXOoJakwhnUklS4muyjllSWAwcO0NnZyb59+8a7lLrX0NBAU1MTkyZNqvo5BrVUBzo7O5k6dSotLS1ExHiXU7cykxdeeIHOzk5OPvnkqp/nrg+pDuzbt4/p06cb0uMsIpg+ffqwP9lUFdQRcXxE3BYRj0TEzog4Y0RVSho3hnQZRvI+VLvr41+AH2XmBRFxNPCWYb+SJGlEhgzqiDgO+BDwNwCZ+Ufgj7UtS1IttSy7Y0znt2fFoiH7TJkyha6urt7Hq1evZtOmTVx33XUsX76cKVOmcPnll79WY+XCuRkzZrzhuT2efvppli5dysaNG5k8eTItLS2sXLmSvXv38o1vfIN169aNzQKOs2q2qN8JPAf8e0R8ANgMXJaZv+/bKSI6gA6A5ubmsa5TBRvoH301/3iPNI7F4ZOZnH/++SxevJg1a9YAsGXLFp555plxrmzsVbOPeiIwB7g+M2cDvweWHdopM1dlZntmtjc29nu5uiSNmXvvvZdJkybxpS99qbetra2ND37wgwB0dXVxwQUX8N73vpfPfe5z9Pya1ZVXXslpp53GrFmz6Ojo6G3/8Ic/zFe+8hXmzp3Lu9/9bn7+858D8Morr/CZz3yG1tZWPvvZzzJv3rzer8i48847OeOMM5gzZw6f/vSne7f6ly1bxsyZM2ltbX3dp4SRqmaLuhPozMwHK49vo5+glqTB/OEPf6Ctra338Ysvvsi5557b+/jaa6/l5ptv7n28d+/eQee3bds2Tj311AGnP/TQQ2zfvp0TTzyR+fPnc//997NgwQKWLFnC17/+dQAuvvhi1q1bxyc+8QkADh48yIYNG/jBD37AFVdcwd133823v/1tpk2bxtatW9m2bVvvMjz//PNcddVV3H333Rx77LFcffXVXHPNNSxZsoS1a9fyyCOPEBG89NJLwx+sQwy5RZ2ZTwO/joj3VJrOBnaM+pUl1ZVjjjmGLVu29N6uvPLK103/8pe//LrpJ5544qheb+7cuTQ1NXHUUUfR1tbGnj17gO4t8Xnz5nHKKaewfv16tm/f3vucT33qUwCceuqpvf3vu+8+LrzwQgBmzZpFa2srAA888AA7duxg/vz5tLW1cdNNN/HEE09w3HHH0dDQwCWXXMLtt9/OW94y+nMvqj3r4++B71bO+Hgc+PyoX1mSRuH9738/t91224DTJ0+e3Ht/woQJHDx4kH379nHppZeyadMmTjrpJJYvX/66c5p7ntPTH2CgHwDPTD7ykY9wyy23vGHahg0buOeee1izZg3XXXcd69evH9Ey9qjqPOrM3FLZ/9yamZ/MzN+M6lUlaZTOOuss9u/fzw033NDbtnHjRn76058O+JyeUJ4xYwZdXV2DBn2PBQsWcOuttwKwY8cOHn74YQBOP/107r//fnbv3g1078vetWsXXV1dvPzyyyxcuJCVK1eyZcuWES9jDy8hl+rQkXAWSkSwdu1ali5dyooVK2hoaOg9Pe/JJ5/s9znHH388X/ziFznllFNoaW
nhtNNOG/J1Lr30UhYvXkxrayuzZ8+mtbWVt771rTQ2NrJ69Wouuugi9u/fD8BVV13F1KlTOe+889i3bx+ZybXXXjv6ZR1os3402tvb0x8OqB+ekvaaUsdi586dvO997xvXGt6sXn31VQ4cOEBDQwOPPfYYZ599Nrt27eLoo48e8Tz7ez8iYnNmtvfX3y1qSRrEK6+8wplnnsmBAwfITK6//vpRhfRIGNSSNIipU6eO+08L+u15Up2oxW5ODd9I3geDWqoDDQ0NvPDCC4b1OOv5PuqGhoZhPc9dH1IdaGpqorOzk+eee268S6l7Pb/wMhwGtVQHJk2aNKxfFFFZ3PUhSYUzqCWpcAa1JBXOoJakwhnUklQ4g1qSCmdQS1LhDGpJKpxBLUmFM6glqXAGtSQVzqCWpMIZ1JJUOINakgpnUEtS4QxqSSqcQS1JhavqF14iYg/wO+BV4GBmtteyKEnSa4bzU1xnZubzNatEktQvd31IUuGq3aJO4M6ISOBfM3PVoR0iogPoAGhubh67CnVEall2x7D671mxqEaVqFoDvWe+N7VX7Rb1/MycA3wc+LuI+NChHTJzVWa2Z2Z7Y2PjmBYpSfWsqqDOzL2Vv88Ca4G5tSxKkvSaIYM6Io6NiKk994GPAttqXZgkqVs1+6jfDqyNiJ7+/5WZP6ppVZKkXkMGdWY+DnzgMNQiSeqHp+dJUuEMakkqnEEtSYUzqCWpcAa1JBXOoJakwhnUklQ4g1qSCmdQS1LhDGpJKpxBLUmFM6glqXAGtSQVzqCWpMIZ1JJUOINakgpnUEtS4QxqSSqcQS1JhTOoJalwBrUkFc6glqTCGdSSVDiDWpIKZ1BLUuEMakkqXNVBHRETIuKhiFhXy4IkSa83nC3qy4CdtSpEktS/qoI6IpqARcC/1bYcSdKhJlbZbyXwj8DUgTpERAfQAdDc3Dz6ylRzLcvuGO8SRm2gZdizYtGY9JdKMOQWdUT8JfBsZm4erF9mrsrM9sxsb2xsHLMCJaneVbPrYz5wbkTsAdYAZ0XEzTWtSpLUa8igzsyvZmZTZrYAFwLrM/Oval6ZJAnwPGpJKl61BxMByMyfAD+pSSWSpH65RS1JhTOoJalwBrUkFc6glqTCGdSSVDiDWpIKZ1BLUuEMakkqnEEtSYUzqCWpcAa1JBXOoJakwhnUklQ4g1qSCmdQS1LhDGpJKpxBLUmFM6glqXAGtSQVzqCWpMIZ1JJUOINakgpnUEtS4QxqSSqcQS1JhRsyqCOiISI2RMQvI2J7RFxxOAqTJHWbWEWf/cBZmdkVEZOA+yLih5n5QI1rkyRRRVBnZgJdlYeTKresZVGSpNdUs0VNREwANgN/DnwrMx/sp08H0AHQ3Nw8ljXqEC3L7hhW/z0rFtWoknINd4zeTGM6UK31+D7Xi6oOJmbmq5nZBjQBcyNiVj99VmVme2a2NzY2jnWdklS3hnXWR2a+BPwEOKcm1UiS3qCasz4aI+L4yv1jgL8AHql1YZKkbtXsoz4BuKmyn/oo4NbMXFfbsiRJPao562MrMPsw1CJJ6odXJkpS4QxqSSqcQS1JhTOoJalwBrUkFc6glqTCGdSSVDiDWpIKZ1BLUuEMakkqnEEtSYUzqCWpcAa1JBXOoJakwhnUklQ4g1qSCmdQS1LhDGpJKpxBLUmFM6glqXAGtSQVzqCWpMIZ1JJUOINakgpnUEtS4QxqSSrckEEdESdFxL0RsTMitkfEZYejMElSt4lV9DkI/ENm/iIipgKbI+KuzNxR49okSVSxRZ2ZT2XmLyr3fwfsBN5R68IkSd2q2aLuFREtwGzgwX6mdQAdAM3NzWNQmlqW3THeJYzKm73+w2Esx2igee1ZsWjMXqOWrzvYWNR6GQYyXmN6qKoPJkbEFOB7wNLM/O2h0zNzVWa2Z2Z7Y2PjWNYoSXWtqqCOiEl0h/R3M/P22pYkSeqrmrM+AvgOsDMzr6l9SZKkvqrZop4PXAycFRFbKreFNa
5LklQx5MHEzLwPiMNQiySpH16ZKEmFM6glqXAGtSQVzqCWpMIZ1JJUOINakgpnUEtS4QxqSSqcQS1JhTOoJalwBrUkFc6glqTCGdSSVDiDWpIKZ1BLUuEMakkqnEEtSYUzqCWpcAa1JBXOoJakwhnUklQ4g1qSCmdQS1LhDGpJKpxBLUmFGzKoI+LGiHg2IrYdjoIkSa9XzRb1auCcGtchSRrAkEGdmT8DXjwMtUiS+jFxrGYUER1AB0Bzc/OI59Oy7I5h9d+zYtGIX6sWhls/1H4ZRlKTBjfcMS3xPah1TQPNfyzX9/HKi8OxbH2N2cHEzFyVme2Z2d7Y2DhWs5WkuudZH5JUOINakgpXzel5twD/A7wnIjoj4gu1L0uS1GPIg4mZedHhKESS1D93fUhS4QxqSSqcQS1JhTOoJalwBrUkFc6glqTCGdSSVDiDWpIKZ1BLUuEMakkqnEEtSYUzqCWpcAa1JBXOoJakwhnUklQ4g1qSCmdQS1LhDGpJKpxBLUmFM6glqXAGtSQVzqCWpMIZ1JJUOINakgpnUEtS4QxqSSpcVUEdEedExKMRsTsiltW6KEnSa4YM6oiYAHwL+DgwE7goImbWujBJUrdqtqjnArsz8/HM/COwBjivtmVJknpEZg7eIeIC4JzMvKTy+GJgXmYuOaRfB9BRefge4NGxL7coM4Dnx7uIAjku/XNcBubYdPvTzGzsb8LEKp4c/bS9Id0zcxWwapiFvWlFxKbMbB/vOkrjuPTPcRmYYzO0anZ9dAIn9XncBOytTTmSpENVE9QbgXdFxMkRcTRwIfD92pYlSeox5K6PzDwYEUuAHwMTgBszc3vNKytf3ezmGSbHpX+Oy8AcmyEMeTBRkjS+vDJRkgpnUEtS4QzqPiLixoh4NiK29Wl7W0TcFRG/qvydVmmPiPhm5bL6rRExp89zFlf6/yoiFo/HsoylAcZleUQ8GRFbKreFfaZ9tTIuj0bEx/q0H1FfRRARJ0XEvRGxMyK2R8Rllfa6XmcGGZe6X2dGLDO9VW7Ah4A5wLY+bf8ELKvcXwZcXbm/EPgh3eeZnw48WGl/G/B45e+0yv1p471sNRiX5cDl/fSdCfwSmAycDDxG90HoCZX77wSOrvSZOd7LNspxOQGYU7k/FdhVWf66XmcGGZe6X2dGenOLuo/M/Bnw4iHN5wE3Ve7fBHyyT/t/ZLcHgOMj4gTgY8BdmfliZv4GuAs4p/bV184A4zKQ84A1mbk/M/8X2E331xAccV9FkJlPZeYvKvd/B+wE3kGdrzODjMtA6madGSmDemhvz8ynoHsFBP6k0v4O4Nd9+nVW2gZqPxItqXyEv7Hn4z11Oi4R0QLMBh7EdabXIeMCrjMjYlCP3ECX1ld1yf0R4Hrgz4A24CngnyvtdTcuETEF+B6wNDN/O1jXftqO2LHpZ1xcZ0bIoB7aM5WPp1T+PltpH+jS+rq45D4zn8nMVzPz/4Ab6P6YCnU2LhExie4w+m5m3l5prvt1pr9xcZ0ZOYN6aN8Heo7CLwb+u0/7X1eO5J8OvFz5mPtj4KMRMa3y0e6jlbYjSk8QVZwP9JwR8n3gwoiYHBEnA+8CNnAEfhVBRATwHWBnZl7TZ1JdrzMDjYvrzCiM99HMkm7ALXR/JDtA9//mXwCmA/cAv6r8fVulb9D9gwqPAQ8D7X3m87d0HxDZDXx+vJerRuPyn5Xl3kr3P54T+vT/WmVcHgU+3qd9Id1nADwGfG28l2sMxmUB3R/FtwJbKreF9b7ODDIudb/OjPTmJeSSVDh3fUhS4QxqSSqcQS1JhTOoJalwBrUkFc6glqTCGdSSVLj/B3k6xfy96drSAAAAAElFTkSuQmCC\n",
"text/plain": [
"<Figure>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"changed_cs = results.compute_consumer_surpluses(changed_prices)\n",
"plt.hist(changed_cs - cs, bins=50);\n",
"plt.legend([\"Consumer Surplus Changes\"]);"
]
},
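To see why a merger mechanically raises concentration, here is a small sketch of the HHI formula: 10,000 times the sum of squared firm-level shares, which is how the documentation defines the output of `compute_hhi`. The firm IDs and shares below are hypothetical:

```python
import numpy as np

def hhi(firm_ids, shares):
    # 10,000 times the sum of squared firm-level shares.
    return 10_000 * sum(shares[firm_ids == f].sum() ** 2 for f in np.unique(firm_ids))

firm_ids = np.array([1, 2, 3])
shares = np.array([0.3, 0.2, 0.1])

pre_hhi = hhi(firm_ids, shares)              # 900 + 400 + 100 = 1400
post_hhi = hhi(np.array([1, 1, 3]), shares)  # firms 2 and 1 merged: 2500 + 100 = 2600
```

Merging firms replaces two squared terms with the square of their sum, which is always weakly larger, so post-merger HHI weakly increases even before prices adjust.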
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Bootstrapping Results\n",
"\n",
"Post-estimation outputs can be informative, but they don't mean much without a sense of sample-to-sample variability. One way to estimate confidence intervals for post-estimation outputs is with a standard bootstrap procedure:\n",
"\n",
"1. Construct a large number of bootstrap samples by sampling with replacement from the original product data.\n",
"2. Initialize and solve a [`Problem`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.Problem.html#pyblp.Problem) for each bootstrap sample.\n",
"3. Compute the desired post-estimation output for each bootstrapped [`ProblemResults`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.html#pyblp.ProblemResults) and, from the resulting empirical distribution, construct bootstrap confidence intervals.\n",
"\n",
"Although appealing because of its simplicity, this procedure often requires prohibitively expensive computational resources. Furthermore, human oversight of the optimization routine is often needed to determine whether it ran into any problems and whether it successfully converged, and such oversight of each bootstrapped problem is usually not feasible.\n",
"\n",
"A more reasonable alternative is a parametric bootstrap procedure:\n",
"\n",
"1. Construct a large number of draws from the estimated joint distribution of parameters.\n",
"2. Compute the implied mean utility, $\\delta$, and shares, $s$, for each draw. If a supply side was estimated, also compute the implied marginal costs, $c$, and prices, $p$.\n",
"3. Compute the desired post-estimation output under each of these parametric bootstrap samples. Again, from the resulting empirical distribution, construct bootstrap confidence intervals.\n",
"\n",
"Compared to the standard bootstrap procedure, the parametric bootstrap requires far fewer computational resources and is simple enough not to require human oversight of each bootstrap iteration. The primary complication is that when supply is estimated, equilibrium prices and shares need to be computed for each parametric bootstrap sample by iterating over the $\\zeta$-markup equation from [Morrow and Skerlos (2011)](https://pyblp.readthedocs.io/en/stable/references.html#morrow-and-skerlos-2011). Although nontrivial, this fixed point iteration problem is much less demanding than the full optimization routine required to solve the BLP problem from the start.\n",
"\n",
"An empirical distribution of results computed according to this parametric bootstrap procedure can be created with the [`ProblemResults.bootstrap`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.bootstrap.html#pyblp.ProblemResults.bootstrap) method, which returns a [`BootstrappedResults`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.BootstrappedResults.html#pyblp.BootstrappedResults) class that can be used just like [`ProblemResults`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.ProblemResults.html#pyblp.ProblemResults) to compute various post-estimation outputs. The difference is that [`BootstrappedResults`](https://pyblp.readthedocs.io/en/stable/_api/pyblp.BootstrappedResults.html#pyblp.BootstrappedResults) methods return arrays with an extra first dimension, along which bootstrapped results are stacked.\n",
"\n",
"We'll construct 90% parametric bootstrap confidence intervals for estimated mean own elasticities in each market of the fake cereal problem. Usually, bootstrapped confidence intervals should be based on thousands of draws, but we'll only use a few for the sake of speed in this example."
]
},
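The $\zeta$-markup fixed point mentioned above can be illustrated with a toy example. This is a minimal sketch that substitutes a made-up linear contraction for $\zeta$; the actual mapping is derived from the demand system in Morrow and Skerlos (2011) and is handled internally by PyBLP:

```python
import numpy as np

# Toy illustration of iterating p <- c + zeta(p) to a fixed point.
# The real zeta-markup mapping comes from the demand system; here we
# substitute a hypothetical linear contraction purely for clarity.
def zeta(prices):
    return 0.5 * (10.0 - prices)  # placeholder mapping, not the real zeta

costs = np.array([1.0, 2.0, 3.0])
prices = costs.copy()
for _ in range(1000):
    updated = costs + zeta(prices)
    if np.max(np.abs(updated - prices)) < 1e-12:
        prices = updated
        break
    prices = updated

# Because this toy zeta is a contraction (slope 0.5 in absolute value),
# the iteration converges to the solution of p = c + 5 - p/2.
print(prices)
```

The same pattern, applied with the true $\zeta$ mapping, is what makes recomputing equilibrium prices for each bootstrap draw feasible without re-running the full estimation routine.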
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Bootstrapped Results Summary:\n",
"======================\n",
"Computation Bootstrap\n",
" Time Draws \n",
"----------- ---------\n",
" 00:00:38 100 \n",
"======================"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"bootstrapped_results = results.bootstrap(draws=100, seed=0)\n",
"bootstrapped_results"
]
},
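The percentile construction behind these confidence intervals can be sketched with NumPy. This assumes a hypothetical stacked array of bootstrapped statistics with draws along the first dimension, which is the shape that `BootstrappedResults` methods return:

```python
import numpy as np

# Hypothetical stacked bootstrap output: 100 draws of a per-market
# statistic (e.g., mean own-price elasticities in 20 markets). In
# practice this array would come from a BootstrappedResults method,
# whose extra first dimension indexes bootstrap draws.
rng = np.random.default_rng(0)
draws = rng.normal(loc=-2.0, scale=0.1, size=(100, 20))

# A 90% percentile confidence interval keeps the middle 90% of the
# empirical distribution: the 5th and 95th percentiles across draws,
# computed separately for each market.
lower, upper = np.percentile(draws, [5, 95], axis=0)

print(lower.shape, upper.shape)
```

Replacing the simulated `draws` with an actual bootstrapped array yields one interval per market.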
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"