],
"text/plain": [
" market_ids city_ids quarter product_ids firm_ids brand_ids shares \\\n",
"0 C01Q1 1 1 F1B04 1 4 0.012417 \n",
"1 C01Q1 1 1 F1B06 1 6 0.007809 \n",
"2 C01Q1 1 1 F1B07 1 7 0.012995 \n",
"3 C01Q1 1 1 F1B09 1 9 0.005770 \n",
"4 C01Q1 1 1 F1B11 1 11 0.017934 \n",
"\n",
" prices sugar mushy ... demand_instruments10 demand_instruments11 \\\n",
"0 0.072088 2 1 ... 2.116358 -0.154708 \n",
"1 0.114178 18 1 ... -7.374091 -0.576412 \n",
"2 0.132391 4 1 ... 2.187872 -0.207346 \n",
"3 0.130344 3 0 ... 2.704576 0.040748 \n",
"4 0.154823 12 0 ... 1.261242 0.034836 \n",
"\n",
" demand_instruments12 demand_instruments13 demand_instruments14 \\\n",
"0 -0.005796 0.014538 0.126244 \n",
"1 0.012991 0.076143 0.029736 \n",
"2 0.003509 0.091781 0.163773 \n",
"3 -0.003724 0.094732 0.135274 \n",
"4 -0.000568 0.102451 0.130640 \n",
"\n",
" demand_instruments15 demand_instruments16 demand_instruments17 \\\n",
"0 0.067345 0.068423 0.034800 \n",
"1 0.087867 0.110501 0.087784 \n",
"2 0.111881 0.108226 0.086439 \n",
"3 0.088090 0.101767 0.101777 \n",
"4 0.084818 0.101075 0.125169 \n",
"\n",
" demand_instruments18 demand_instruments19 \n",
"0 0.126346 0.035484 \n",
"1 0.049872 0.072579 \n",
"2 0.122347 0.101842 \n",
"3 0.110741 0.104332 \n",
"4 0.133464 0.121111 \n",
"\n",
"[5 rows x 30 columns]"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"product_data = pd.read_csv(pyblp.data.NEVO_PRODUCTS_LOCATION)\n",
"product_data.head()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The product data contains `market_ids`, `product_ids`, `firm_ids`, `shares`, `prices`, a number of other IDs and product characteristics, and some pre-computed excluded instruments, `demand_instruments0`, `demand_instruments1`, and so on. The `product_ids` will be incorporated as fixed effects. \n",
"\n",
"For more information about the instruments and the example data as a whole, refer to the [`data`](https://pyblp.readthedocs.io/en/latest/_api/pyblp.data.html#module-pyblp.data) module."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Setting Up the Problem\n",
"\n",
"We can combine the [`Formulation`](https://pyblp.readthedocs.io/en/latest/_api/pyblp.Formulation.html#pyblp.Formulation) and `product_data` to construct a [`Problem`](https://pyblp.readthedocs.io/en/latest/_api/pyblp.Problem.html#pyblp.Problem). We pass the [`Formulation`](https://pyblp.readthedocs.io/en/latest/_api/pyblp.Formulation.html#pyblp.Formulation) first and the `product_data` second. We can also display the properties of the problem by typing its name. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"prices + Absorb[C(product_ids)]"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"logit_formulation = pyblp.Formulation('prices', absorb='C(product_ids)')\n",
"logit_formulation"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Dimensions:\n",
"================================\n",
" T N F K1 MD ED \n",
"--- ---- --- ---- ---- ----\n",
"94 2256 5 1 20 1 \n",
"================================\n",
"\n",
"Formulations:\n",
"==================================\n",
" Column Indices: 0 \n",
"-------------------------- ------\n",
"X1: Linear Characteristics prices\n",
"=================================="
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"problem = pyblp.Problem(logit_formulation, product_data)\n",
"problem"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Two sets of properties are displayed:\n",
"\n",
"1. Dimensions of the data.\n",
"2. Formulations of the problem.\n",
"\n",
"The dimensions describe the shapes of matrices as laid out in [Notation](https://pyblp.readthedocs.io/en/latest/notation.html#notation). They include:\n",
"\n",
"- $T$ is the number of markets.\n",
"- $N$ is the length of the dataset (the number of products across all markets).\n",
"- $F$ is the number of firms, which we won't use in this example.\n",
"- $K_1$ is the dimension of the linear demand parameters.\n",
"- $M_D$ is the dimension of the instrument variables (excluded instruments and exogenous regressors).\n",
"- $E_D$ is the number of fixed effect dimensions (one-dimensional fixed effects, two-dimensional fixed effects, etc.).\n",
"\n",
"There is only a single [`Formulation`](https://pyblp.readthedocs.io/en/latest/_api/pyblp.Formulation.html#pyblp.Formulation) for this model. \n",
"\n",
"- $X_1$ is the linear component of utility for demand and depends only on prices (after the fixed effects are removed)."
]
},
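{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check on what these dimensions count, here is a sketch (using a tiny hypothetical dataset rather than the Nevo data) of how $T$, $N$, and $F$ map to simple pandas operations; the same calls applied to `product_data` reproduce the 94, 2256, and 5 shown above:\n",
"\n",
"```python\n",
"import pandas as pd\n",
"\n",
"# hypothetical miniature product data: 2 markets, 3 products, 2 firms\n",
"df = pd.DataFrame({\n",
"    'market_ids': ['C01Q1', 'C01Q1', 'C01Q2'],\n",
"    'firm_ids': [1, 2, 1],\n",
"    'prices': [0.07, 0.11, 0.13],\n",
"})\n",
"\n",
"T = df['market_ids'].nunique()  # number of markets\n",
"N = len(df)                     # number of products across all markets\n",
"F = df['firm_ids'].nunique()    # number of firms\n",
"print(T, N, F)  # 2 3 2\n",
"```"
]
},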
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Solving the Problem\n",
"\n",
"The [`Problem.solve`](https://pyblp.readthedocs.io/en/latest/_api/pyblp.Problem.solve.html#pyblp.Problem.solve) method always returns a [`ProblemResults`](https://pyblp.readthedocs.io/en/latest/_api/pyblp.ProblemResults.html#pyblp.ProblemResults) class, which can be used to compute post-estimation outputs. See the [post estimation](post_estimation.ipynb) tutorial for more information."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Problem Results Summary:\n",
"==========================================\n",
"GMM Objective Clipped Weighting Matrix\n",
"Step Value Shares Condition Number\n",
"---- --------- ------- ----------------\n",
" 2 +1.9E+02 0 +5.7E+07 \n",
"==========================================\n",
"\n",
"Cumulative Statistics:\n",
"========================\n",
"Computation Objective \n",
" Time Evaluations\n",
"----------- -----------\n",
" 00:00:00 2 \n",
"========================\n",
"\n",
"Beta Estimates (Robust SEs in Parentheses):\n",
"==========\n",
" prices \n",
"----------\n",
" -3.0E+01 \n",
"(+1.0E+00)\n",
"=========="
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"logit_results = problem.solve()\n",
"logit_results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Theory of Nested Logit\n",
"\n",
"We can extend the logit model to allow for correlation within a group $h$ so that\n",
"\n",
"$$U_{ijt} = \\alpha p_{jt} + x_{jt} \\beta^\\text{ex} + \\xi_{jt} + \\bar{\\epsilon}_{h(j)ti} + (1 - \\rho) \\bar{\\epsilon}_{ijt}.$$\n",
"\n",
"Now, we require that $\\epsilon_{ijt} = \\bar{\\epsilon}_{h(j)ti} + (1 - \\rho) \\bar{\\epsilon}_{ijt}$ is distributed IID with the Type I Extreme Value (Gumbel) distribution. As $\\rho \\rightarrow 1$, all consumers stay within their group. As $\\rho \\rightarrow 0$, this collapses to the IIA logit. Note that if we wanted, we could allow $\\rho$ to differ between groups with the notation $\\rho_{h(j)}$.\n",
"\n",
"This gives us aggregate market shares as the product of two logits, the within group logit and the across group logit:\n",
"\n",
"$$s_{jt} = \\frac{\\exp[V_{jt} / (1 - \\rho)]}{\\exp[V_{h(j)t} / (1 - \\rho)]}\\cdot\\frac{\\exp V_{h(j)t}}{1 + \\sum_h \\exp V_{ht}},$$\n",
"\n",
"where $V_{jt} = \\alpha p_{jt} + x_{jt} \\beta^\\text{ex} + \\xi_{jt}$ and $V_{h(j)t} = (1 - \\rho) \\log \\sum_{k \\in J_{h(j)t}} \\exp[V_{kt} / (1 - \\rho)]$ is the inclusive value of group $h(j)$ in market $t$, with $J_{h(j)t}$ the set of products in that group.\n",
"\n",
"After some work we again obtain the linear estimating equation:\n",
"\n",
"$$\\log s_{jt} - \\log s_{0t} = \\alpha p_{jt}+ x_{jt} \\beta^\\text{ex} +\\rho \\log s_{j|h(j)t} + \\xi_{jt},$$\n",
"\n",
"where $s_{j|h(j)t} = s_{jt} / s_{h(j)t}$ and $s_{h(j)t}$ is the share of group $h$ in market $t$. See [Berry (1994)](https://pyblp.readthedocs.io/en/latest/references.html#berry-1994) or [Cardell (1997)](https://pyblp.readthedocs.io/en/latest/references.html#cardell-1997) for more information.\n",
"\n",
"Again, the left-hand side is data, though $\\log s_{j|h(j)t}$ is clearly endogenous, which means we must instrument for it. Rather than include $\\log s_{j|h(j)t}$ along with the linear components of utility, $X_1$, whenever `nesting_ids` are included in `product_data`, $\\rho$ is treated as a nonlinear $X_2$ parameter. This means that the linear component is given instead by\n",
"\n",
"$$\\log s_{jt} - \\log s_{0t} - \\rho \\log s_{j|h(j)t} = \\alpha p_{jt} + x_{jt} \\beta^\\text{ex} + \\xi_{jt}.$$\n",
"\n",
"This is done for two reasons:\n",
"\n",
"1. It forces the user to treat $\\rho$ as an endogenous parameter.\n",
"2. It extends much more easily to the RCNL model of [Brenkers and Verboven (2006)](https://pyblp.readthedocs.io/en/latest/references.html#brenkers-and-verboven-2006).\n",
"\n",
"A common choice for an additional instrument is the number of products per nest."
]
},
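{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make these formulas concrete, here is a small numerical sketch (not part of the package; all values are made up) that constructs nested logit shares for one market as the product of the within group and across group logits and then verifies the estimating equation above:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# hypothetical market: 3 products, two groups, rho = 0.5\n",
"V = np.array([1.0, 0.5, 0.2])  # mean utilities V_jt\n",
"h = np.array([0, 0, 1])        # group h(j) of each product\n",
"rho = 0.5\n",
"\n",
"# inclusive values: V_ht = (1 - rho) * log(sum over j in h of exp[V_jt / (1 - rho)])\n",
"V_h = np.array([(1 - rho) * np.log(np.exp(V[h == g] / (1 - rho)).sum()) for g in (0, 1)])\n",
"\n",
"# shares as the product of the within group and across group logits\n",
"within = np.exp(V / (1 - rho)) / np.exp(V_h[h] / (1 - rho))\n",
"across = np.exp(V_h[h]) / (1 + np.exp(V_h).sum())\n",
"s = within * across\n",
"s0 = 1 / (1 + np.exp(V_h).sum())  # outside share\n",
"\n",
"# Berry-style inversion: log s_jt - log s_0t - rho * log s_j|h(j)t recovers V_jt\n",
"group_share = np.array([s[h == g].sum() for g in (0, 1)])[h]\n",
"lhs = np.log(s) - np.log(s0) - rho * np.log(s / group_share)\n",
"print(np.allclose(lhs, V))  # True\n",
"```\n",
"\n",
"The shares and the outside share sum to one, and the inversion recovers the mean utilities exactly, which is what makes linear IV estimation possible once $\\log s_{j|h(j)t}$ is instrumented."
]
},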
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Application of Nested Logit\n",
"\n",
"By including `nesting_ids` (another reserved name) as a field in `product_data`, we tell the package to estimate a nested logit model, and we don't need to change any of the formulas. We show how to construct the category groupings in two different ways:\n",
"\n",
"1. We put all products in a single nest (only the outside good in the other nest).\n",
"2. We put products into two nests (either mushy or non-mushy).\n",
"\n",
"We also construct an additional instrument based on the number of products per nest. Typically this is useful as a source of exogenous variation in the within group share $\\log s_{j|h(j)t}$. In this example, however, the number of products per nest does not vary across markets, so this instrument would be irrelevant if we also included product fixed effects.\n",
"\n",
"We'll define a function that constructs the additional instrument and solves the nested logit problem. We'll exclude product ID fixed effects, which are collinear with `mushy`, and we'll choose $\\rho = 0.7$ as the initial value at which the optimization routine will start."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"def solve_nl(df):\n",
"    # use the number of products per market-nest pair as an extra instrument\n",
"    groups = df.groupby(['market_ids', 'nesting_ids'])\n",
"    df['demand_instruments20'] = groups['shares'].transform(np.size)\n",
"    nl_formulation = pyblp.Formulation('0 + prices')\n",
"    problem = pyblp.Problem(nl_formulation, df)\n",
"    return problem.solve(rho=0.7)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, we'll solve the problem when there's a single nest for all products, with the outside good in its own nest."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Problem Results Summary:\n",
"======================================================================================\n",
"GMM Objective Projected Reduced Clipped Weighting Matrix Covariance Matrix\n",
"Step Value Gradient Norm Hessian Shares Condition Number Condition Number \n",
"---- --------- ------------- -------- ------- ---------------- -----------------\n",
" 2 +2.0E+02 +7.9E-10 +1.1E+04 0 +2.0E+09 +3.0E+04 \n",
"======================================================================================\n",
"\n",
"Cumulative Statistics:\n",
"=================================================\n",
"Computation Optimizer Optimization Objective \n",
" Time Converged Iterations Evaluations\n",
"----------- --------- ------------ -----------\n",
" 00:00:05 Yes 3 8 \n",
"=================================================\n",
"\n",
"Rho Estimates (Robust SEs in Parentheses):\n",
"==========\n",
"All Groups\n",
"----------\n",
" +9.8E-01 \n",
"(+1.4E-02)\n",
"==========\n",
"\n",
"Beta Estimates (Robust SEs in Parentheses):\n",
"==========\n",
" prices \n",
"----------\n",
" -1.2E+00 \n",
"(+4.0E-01)\n",
"=========="
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df1 = product_data.copy()\n",
"df1['nesting_ids'] = 1\n",
"nl_results1 = solve_nl(df1)\n",
"nl_results1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When we inspect the [`Problem`](https://pyblp.readthedocs.io/en/latest/_api/pyblp.Problem.html#pyblp.Problem), the only changes from the plain logit model are the additional instrument, which contributes to $M_D$, and the inclusion of $H$, the number of nesting categories."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Dimensions:\n",
"===============================\n",
" T N F K1 MD H \n",
"--- ---- --- ---- ---- ---\n",
"94 2256 5 1 21 1 \n",
"===============================\n",
"\n",
"Formulations:\n",
"==================================\n",
" Column Indices: 0 \n",
"-------------------------- ------\n",
"X1: Linear Characteristics prices\n",
"=================================="
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"nl_results1.problem"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we'll solve the problem when there are two nests for mushy and non-mushy."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Problem Results Summary:\n",
"======================================================================================\n",
"GMM Objective Projected Reduced Clipped Weighting Matrix Covariance Matrix\n",
"Step Value Gradient Norm Hessian Shares Condition Number Condition Number \n",
"---- --------- ------------- -------- ------- ---------------- -----------------\n",
" 2 +6.9E+02 +8.2E-09 +5.6E+03 0 +5.1E+08 +2.0E+04 \n",
"======================================================================================\n",
"\n",
"Cumulative Statistics:\n",
"=================================================\n",
"Computation Optimizer Optimization Objective \n",
" Time Converged Iterations Evaluations\n",
"----------- --------- ------------ -----------\n",
" 00:00:06 Yes 4 13 \n",
"=================================================\n",
"\n",
"Rho Estimates (Robust SEs in Parentheses):\n",
"==========\n",
"All Groups\n",
"----------\n",
" +8.9E-01 \n",
"(+1.9E-02)\n",
"==========\n",
"\n",
"Beta Estimates (Robust SEs in Parentheses):\n",
"==========\n",
" prices \n",
"----------\n",
" -7.8E+00 \n",
"(+4.8E-01)\n",
"=========="
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df2 = product_data.copy()\n",
"df2['nesting_ids'] = df2['mushy']\n",
"nl_results2 = solve_nl(df2)\n",
"nl_results2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For both cases we find that $\\hat{\\rho} > 0.8$.\n",
"\n",
"Finally, we'll also look at the adjusted parameter on prices, $\\alpha / (1-\\rho)$."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[-67.39338888]])"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"nl_results1.beta[0] / (1 - nl_results1.rho)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([[-72.27074638]])"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"nl_results2.beta[0] / (1 - nl_results2.rho)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Treating Within Group Shares as Exogenous\n",
"\n",
"The package is designed to prevent the user from treating the within group share, $\\log s_{j|h(j)t}$, as an exogenous variable. For example, if we were to compute a `group_share` variable and use the algebraic functionality of [`Formulation`](https://pyblp.readthedocs.io/en/latest/_api/pyblp.Formulation.html#pyblp.Formulation) by including the expression `log(shares / group_share)` in our formula for $X_1$, the package would raise an error because the package knows that `shares` should not be included in this formulation.\n",
"\n",
"To demonstrate why this is a bad idea, we'll override this feature by calculating $\\log s_{j|h(j)t}$ and including it as an additional variable in $X_1$. To do so, we'll first re-define our function for setting up and solving the nested logit problem."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"def solve_nl2(df):\n",
"    groups = df.groupby(['market_ids', 'nesting_ids'])\n",
"    # compute within group shares directly from the data\n",
"    df['group_share'] = groups['shares'].transform(np.sum)\n",
"    df['within_share'] = df['shares'] / df['group_share']\n",
"    # same product-count instrument as before\n",
"    df['demand_instruments20'] = groups['shares'].transform(np.size)\n",
"    # incorrectly treat log(within_share) as exogenous by putting it in X1\n",
"    nl2_formulation = pyblp.Formulation('0 + prices + log(within_share)')\n",
"    problem = pyblp.Problem(nl2_formulation, df.drop(columns=['nesting_ids']))\n",
"    return problem.solve()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Again, we'll solve the problem when there's a single nest for all products, with the outside good in its own nest."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Problem Results Summary:\n",
"=============================================================\n",
"GMM Objective Clipped Weighting Matrix Covariance Matrix\n",
"Step Value Shares Condition Number Condition Number \n",
"---- --------- ------- ---------------- -----------------\n",
" 2 +2.0E+02 0 +2.1E+09 +1.1E+04 \n",
"=============================================================\n",
"\n",
"Cumulative Statistics:\n",
"========================\n",
"Computation Objective \n",
" Time Evaluations\n",
"----------- -----------\n",
" 00:00:00 2 \n",
"========================\n",
"\n",
"Beta Estimates (Robust SEs in Parentheses):\n",
"=============================\n",
" prices log(within_share)\n",
"---------- -----------------\n",
" -1.0E+00 +9.9E-01 \n",
"(+2.4E-01) (+7.9E-03) \n",
"============================="
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"nl2_results1 = solve_nl2(df1)\n",
"nl2_results1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And again, we'll solve the problem when there are two nests for mushy and non-mushy."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Problem Results Summary:\n",
"=============================================================\n",
"GMM Objective Clipped Weighting Matrix Covariance Matrix\n",
"Step Value Shares Condition Number Condition Number \n",
"---- --------- ------- ---------------- -----------------\n",
" 2 +7.0E+02 0 +5.5E+08 +7.7E+03 \n",
"=============================================================\n",
"\n",
"Cumulative Statistics:\n",
"========================\n",
"Computation Objective \n",
" Time Evaluations\n",
"----------- -----------\n",
" 00:00:00 2 \n",
"========================\n",
"\n",
"Beta Estimates (Robust SEs in Parentheses):\n",
"=============================\n",
" prices log(within_share)\n",
"---------- -----------------\n",
" -6.8E+00 +9.3E-01 \n",
"(+2.9E-01) (+1.1E-02) \n",
"============================="
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"nl2_results2 = solve_nl2(df2)\n",
"nl2_results2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we obtain parameter estimates that are quite different from those above."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([-86.37368446])"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"nl2_results1.beta[0] / (1 - nl2_results1.beta[1])"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([-100.14496891])"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"nl2_results2.beta[0] / (1 - nl2_results2.beta[1])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
},
"pycharm": {
"stem_cell": {
"cell_type": "raw",
"metadata": {
"collapsed": false
},
"source": []
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}