Models¶
The skagent.models subpackage contains predefined economic models:
Consumer Models¶
Benchmarks¶
Analytically Solvable Consumption-Savings Models
This module implements a collection of discrete-time consumption-savings dynamic programming problems whose optimal policies can be written in true closed form. These are well-known benchmark problems from the economics literature with established analytical solutions.
THEORETICAL FOUNDATION¶
An entry qualifies for inclusion ONLY if:

(i) the problem is a bona fide dynamic programming problem, and
(ii) the optimal c_t (and any other control) can be written in closed form with no recursive objects left implicit.
Standard Timing Convention (Adopted Throughout)¶
- t ∈ {0,1,2,…} : period index
- A_{t-1} : beginning-of-period assets (arrival state, before interest)
- y_t : non-capital income (realized in period t)
- R : gross return on assets (R = 1 + r > 1)
- m_t = R*A_{t-1} + y_t : cash-on-hand (market resources available for consumption)
- c_t : consumption (control variable)
- A_t = m_t - c_t : end-of-period assets (state for next period)
- H_t = E_t[∑_{s=1}^∞ R^{-s} y_{t+s}] : human wealth (present value of future income)
- W_t = m_t + H_t : total wealth (cash-on-hand plus human wealth)
- u(c) : period utility function
- β : discount factor
- TVC : lim_{T→∞} E_0[β^T u'(c_T) A_T] = 0 (transversality condition)
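A minimal sketch of one period under this timing convention (the names `period_transition` and `policy` are illustrative, not part of the skagent API):

```python
# Sketch of one period under the standard timing convention.
# All names here are illustrative, not part of the skagent API.

def period_transition(A_prev, y, R, policy):
    """Advance one period: arrival assets in, consumption out, end-of-period assets next."""
    m = R * A_prev + y      # cash-on-hand: interest on arrival assets plus income
    c = policy(m)           # consumption chosen from cash-on-hand
    A = m - c               # end-of-period assets carried to the next period
    return m, c, A

# Example: a policy that consumes half of cash-on-hand.
m, c, A = period_transition(A_prev=1.0, y=1.0, R=1.04, policy=lambda m: 0.5 * m)
```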
- skagent.models.benchmarks.crra_utility(c, gamma)¶
CRRA utility: u(c) = c^(1-gamma)/(1-gamma) for gamma != 1, log(c) for gamma == 1
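A standalone sketch of the documented formula (the actual skagent implementation may differ in detail, e.g. in vectorization):

```python
import math

def crra_utility(c, gamma):
    """u(c) = c**(1 - gamma) / (1 - gamma) for gamma != 1, log(c) for gamma == 1."""
    if gamma == 1:
        return math.log(c)          # limiting case: log utility
    return c ** (1 - gamma) / (1 - gamma)
```

For example, `crra_utility(1.0, 1)` is 0.0 and `crra_utility(2.0, 2)` is -0.5.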
- skagent.models.benchmarks.d1_analytical_lifetime_reward(initial_wealth, discount_factor, interest_rate, time_horizon)¶
Analytical lifetime reward for D-1: Finite horizon log utility.
Forward simulation that exactly matches the D-1 model implementation.
- skagent.models.benchmarks.d1_analytical_policy(states, shocks, parameters)¶
D-1: c_t = (1-β)/(1-β^(T-t)) * W_t (remaining horizon formula)
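The remaining-horizon formula can be sketched as follows (`d1_consumption` is an illustrative name, not the skagent API; the documented decision function takes states, shocks, and parameters instead):

```python
def d1_consumption(W, beta, t, T):
    """D-1 closed form as documented: c_t = (1 - beta) / (1 - beta**(T - t)) * W_t."""
    return (1 - beta) / (1 - beta ** (T - t)) * W

# With one period remaining (T - t == 1) the fraction collapses to 1,
# so the agent consumes all remaining wealth in the final decision period.
```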
- skagent.models.benchmarks.d2_analytical_lifetime_reward(cash_on_hand, discount_factor, interest_rate, risk_aversion, income=0.0)¶
Analytical lifetime reward for D-2 using total wealth.
With constant income y > 0, the value function is based on total wealth W = m + H where H = y/r is human wealth.
Optimal policy: c = κ*(m + H), where κ = (R - (βR)^(1/σ))/R.
Value function: V(W) = κ^(1-σ)/(1-σ) * W^(1-σ) / (1 - β*(βR)^((1-σ)/σ))
- skagent.models.benchmarks.d2_analytical_policy(states, shocks, parameters)¶
D-2: c_t = κ*W_t where κ = (R - (βR)^(1/σ))/R and W_t = m_t + H_t
This is a proper decision function that:

1. Takes arrival states (a), shocks, and parameters as input
2. Computes information set variables (m) from arrival state and parameters
3. Computes total wealth (W = m + H) including human wealth
4. Returns optimal controls based on total wealth
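A sketch of the D-2 closed form under the assumptions above (`d2_consumption` is an illustrative name; H = y/r with r = R - 1 for constant income y):

```python
def d2_consumption(m, R, beta, sigma, y=0.0):
    """D-2 sketch: c = kappa * (m + H), with kappa = (R - (beta*R)**(1/sigma)) / R
    and human wealth H = y / r for constant income y (r = R - 1)."""
    r = R - 1
    H = y / r if y > 0 else 0.0
    kappa = (R - (beta * R) ** (1 / sigma)) / R   # marginal propensity to consume
    return kappa * (m + H)
```

In the knife-edge case β*R = 1, κ reduces to r/R, the annuity factor.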
- skagent.models.benchmarks.d3_analytical_lifetime_reward(cash_on_hand, discount_factor, interest_rate, risk_aversion, survival_prob, income=0.0)¶
Analytical lifetime reward for D-3: Blanchard discrete-time mortality.
Similar to D-2 but uses the effective discount factor β_eff = s*β, where s is the survival probability. Mortality risk effectively increases the discount rate, making the agent more impatient.
With income y > 0, uses total wealth W = m + H, where H = y/r.
Value function: V(W) = κ_eff^(1-σ)/(1-σ) * W^(1-σ) / (1 - β_eff*(β_eff*R)^((1-σ)/σ)), where κ_eff uses β_eff = s*β in place of β.
- skagent.models.benchmarks.d3_analytical_policy(states, shocks, parameters)¶
D-3: c_t = κ_s*(m_t + H) where κ_s = (R - (sβR)^(1/σ))/R
This is a proper decision function that:

1. Takes arrival states (a), shocks, and parameters as input
2. Computes information set variables (m) from arrival state and parameters
3. Returns optimal controls based on information set
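The mortality-adjusted MPC can be sketched as below (`d3_mpc` is an illustrative name, not the skagent API); setting s = 1 recovers the D-2 coefficient:

```python
def d3_mpc(R, beta, sigma, s):
    """D-3 sketch: kappa_s = (R - (s*beta*R)**(1/sigma)) / R.
    With survival probability s < 1 the effective discount factor beta_eff = s*beta
    shrinks, so kappa_s rises: the agent consumes a larger share of wealth."""
    return (R - (s * beta * R) ** (1 / sigma)) / R
```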
- skagent.models.benchmarks.euler_equation_test(model_id, test_points=100)¶
Test Euler equation satisfaction for stochastic analytical solutions
- skagent.models.benchmarks.get_analytical_lifetime_reward(model_id, *args, **kwargs)¶
Get analytical lifetime reward for a benchmark model.
- skagent.models.benchmarks.get_analytical_policy(model_id)¶
Get analytical policy function by model ID
- skagent.models.benchmarks.get_benchmark_calibration(model_id)¶
Get benchmark calibration by model ID
- skagent.models.benchmarks.get_benchmark_model(model_id)¶
Get benchmark model by ID (D-1, D-2, D-3, U-1, or U-2)
- skagent.models.benchmarks.get_custom_validation(model_id)¶
Get custom validation function for model (if it has one)
- skagent.models.benchmarks.get_test_states(model_id, test_points=10)¶
Get test states for model validation by model ID
- skagent.models.benchmarks.list_benchmark_models()¶
List all analytically solvable discrete-time benchmark models
- skagent.models.benchmarks.u1_analytical_policy(states, shocks, parameters)¶
U-1: Permanent Income Hypothesis with β*R = 1
This is a proper decision function that implements the PIH solution: the agent consumes the annuity value of total wealth (financial plus human). The martingale property E[c_{t+1}] = c_t is a consequence of this optimal policy.
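A sketch of the PIH policy and its martingale property, assuming β*R = 1 and income y_t = ȳ + ε_t (all names illustrative, not the skagent API). With zero shocks, the annuity-value policy keeps consumption exactly constant:

```python
# Sketch of the U-1 (PIH) policy, assuming beta*R = 1 and constant mean income.

def pih_consumption(m, R, y_bar):
    r = R - 1
    H = y_bar / r                 # human wealth of constant mean income
    return (r / R) * (m + H)      # annuity value of total wealth

# Simulate with zero income shocks: consumption should be constant over time.
R, y_bar, A = 1.04, 1.0, 2.0
path = []
for _ in range(5):
    m = R * A + y_bar             # standard timing: m_t = R*A_{t-1} + y_t
    c = pih_consumption(m, R, y_bar)
    A = m - c
    path.append(c)
```

Under income shocks, the same algebra gives c_{t+1} - c_t = (r/R)*ε_{t+1}, which has mean zero, yielding the martingale property.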
- skagent.models.benchmarks.u2_analytical_policy(states, shocks, parameters)¶
U-2: PIH with Geometric Random Walk Income using standard timing.
Uses standard timing m_t = R*A_{t-1} + p_t for consistency. With ρ = 1, income follows the geometric random walk p_t = p_{t-1} * ψ_t. Human wealth: H_t = p_t / r (present value of the geometric random-walk income).
Standard timing analytical solution: c_t = (1-β)(m_t + H_t)
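The U-2 closed form under standard timing can be sketched as follows (`u2_consumption` and its arguments are illustrative names, not the skagent API):

```python
# Sketch of the U-2 closed form: c_t = (1 - beta) * (m_t + p_t / r),
# with income following the geometric random walk p_t = p_{t-1} * psi_t.

def u2_consumption(A_prev, p_prev, psi, R, beta):
    r = R - 1
    p = p_prev * psi          # geometric random-walk income draw
    m = R * A_prev + p        # standard-timing cash-on-hand
    H = p / r                 # human wealth: present value of the income process
    return (1 - beta) * (m + H)
```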