This notebook shows how to implement naive beta-delta discounting in PyLCM using
a custom aggregation function and SolveSimulateFunctionPair. It covers:
The beta-delta discount function
Why naive agents need different $H$ functions for solve vs. simulate
A 3-period consumption-savings model with analytical solutions
Verifying numerical results against closed-form solutions
Background: Beta-Delta Discounting¶
Standard exponential discounting uses a single discount factor $\delta$ to weight future utility:

$$U_t = \sum_{k=0}^{T-t} \delta^k \, u_{t+k}$$

Quasi-hyperbolic (beta-delta) discounting (Laibson, 1997) introduces a present-bias parameter $\beta$ that discounts all future periods relative to the present:

$$U_t = u_t + \beta \sum_{k=1}^{T-t} \delta^k \, u_{t+k}$$

The discount weights are $1, \beta\delta, \beta\delta^2, \beta\delta^3, \dots$. When $\beta = 1$ this reduces to exponential discounting. When $\beta < 1$ the agent is present-biased — they overweight current utility relative to any future period.
Note the key structure: $\beta$ appears once (not compounded). The weight $k$ periods ahead is $\beta\delta^k$, not $(\beta\delta)^k$.
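To make the weights concrete, here is a small standalone sketch (plain Python, not PyLCM code) comparing the two schemes for $\beta = 0.7$, $\delta = 0.95$, the values used later in this notebook:

```python
beta, delta = 0.7, 0.95

# Quasi-hyperbolic weights: 1 today, then beta * delta**k for k >= 1.
qh_weights = [1.0] + [beta * delta**k for k in range(1, 4)]

# Exponential weights (beta = 1): delta**k throughout.
exp_weights = [delta**k for k in range(4)]

# beta scales every future weight by the same factor, so the ratio
# between consecutive *future* weights is still delta.
print(qh_weights)   # approx [1.0, 0.665, 0.632, 0.600]
print(exp_weights)  # approx [1.0, 0.95, 0.9025, 0.857]
```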
PyLCM’s $H$ Function and the Naive Agent¶
PyLCM defines the value function recursively via an aggregation function $H$:

$$V_t(x) = \max_{a} \, H\bigl(u(x, a), \; \mathbb{E}[V_{t+1}(x')]\bigr)$$

The default is $H(u, \mathbb{E}[V']) = u + \delta \, \mathbb{E}[V']$, i.e. standard exponential discounting.
A naive beta-delta agent believes their future selves will be exponential discounters, but at each decision point applies present bias to the continuation value. This requires two different implementations:
Solve phase (backward induction): use exponential discounting, $H = u + \delta \, \mathbb{E}[V']$, to compute the perceived continuation values
Simulate phase (action selection): use $H = u + \beta\delta \, \mathbb{E}[V']$ to choose actions, applying present bias to the perceived continuation value
SolveSimulateFunctionPair wraps these two implementations so you can call
simulate once with a single params dict.
Why not just use beta_delta_H for both phases? Because the recursion would compound $\beta$ at every step, giving effective weights $(\beta\delta)^k$ instead of the correct $\beta\delta^k$. Splitting the phases ensures $\beta$ is applied exactly once, at the action-selection step.
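The compounding error is easy to demonstrate with plain arithmetic. This sketch uses unit per-period utilities (an assumption made here purely so the effective weights are visible directly) to contrast the wrong recursion with the solve/simulate split:

```python
beta, delta = 0.7, 0.95
bd = beta * delta
u = [1.0, 1.0, 1.0]  # unit utilities in periods 0, 1, 2

# Wrong: applying beta * delta at every backward-induction step.
v_wrong = u[2]
for t in (1, 0):
    v_wrong = u[t] + bd * v_wrong
# Effective weights: 1, beta*delta, (beta*delta)**2

# Right: exponential recursion (solve), then one beta-delta step (simulate).
v_perceived = u[1] + delta * u[2]   # perceived continuation value
v_right = u[0] + bd * v_perceived
# Effective weights: 1, beta*delta, beta*delta**2

assert abs(v_wrong - (1 + bd + bd**2)) < 1e-12
assert abs(v_right - (1 + bd + beta * delta**2)) < 1e-12
```

Since $(\beta\delta)^2 < \beta\delta^2$ for $\beta < 1$, the wrong recursion underweights period-2 utility.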
Setup: A 3-Period Model¶
We use a minimal consumption-savings model:
3 periods: $t = 0, 1$ (decisions), $t = 2$ (terminal)
1 state: wealth $w$
1 action: consumption $c$
Utility: $u(c) = \log(c)$
Budget: $w_{t+1} = w_t - c_t$ (interest rate $r = 0$)
Constraint: $c_t \leq w_t$
Terminal: $V_2(w) = \log(w)$ (consume everything)
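As a preview of what the solver does, the exponential case ($\beta = 1$) of this model can be solved by brute-force backward induction on a consumption grid, independently of PyLCM (a NumPy sketch; the grid sizes are arbitrary choices):

```python
import numpy as np

delta, w0 = 0.95, 20.0
c_grid = np.linspace(1e-3, w0, 2001)

def V1(w):
    """Period-1 value: max over c of log(c) + delta * log(w - c)."""
    c = c_grid[c_grid < w]
    return np.max(np.log(c) + delta * np.log(w - c))

# Period 0: max over c of log(c) + delta * V1(w0 - c).
c0_candidates = c_grid[c_grid < w0 - 0.1]  # keep some wealth for t = 1, 2
values = np.array([np.log(c) + delta * V1(w0 - c) for c in c0_candidates])
c0 = c0_candidates[np.argmax(values)]

# c0 approximates the closed form w0 / (1 + delta + delta**2), roughly 7.01.
```

The grid-search answer differs from the closed form only by discretization error, the same effect seen in the PyLCM results below.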
import jax.numpy as jnp
from lcm import (
    AgeGrid,
    LinSpacedGrid,
    Model,
    Regime,
    SolveSimulateFunctionPair,
    categorical,
)
from lcm.typing import BoolND, ContinuousAction, ContinuousState, FloatND, ScalarInt

@categorical(ordered=False)
class RegimeId:
    working: int
    dead: int

def utility(consumption: ContinuousAction) -> FloatND:
    return jnp.log(consumption)

def terminal_utility(wealth: ContinuousState) -> FloatND:
    return jnp.log(wealth)

def next_wealth(
    wealth: ContinuousState, consumption: ContinuousAction
) -> ContinuousState:
    return wealth - consumption

def next_regime(age: float) -> ScalarInt:
    return jnp.where(age >= 1, RegimeId.dead, RegimeId.working)

def borrowing_constraint(
    consumption: ContinuousAction, wealth: ContinuousState
) -> BoolND:
    return consumption <= wealth

def exponential_H(
    utility: float,
    E_next_V: float,
    discount_factor: float,
) -> float:
    return utility + discount_factor * E_next_V

def beta_delta_H(
    utility: float,
    E_next_V: float,
    beta: float,
    delta: float,
) -> float:
    return utility + beta * delta * E_next_V

We build two models:
One with exponential_H for the standard exponential agent
One with SolveSimulateFunctionPair for the naive beta-delta agent — solving with exponential and simulating with beta-delta
WEALTH_GRID = LinSpacedGrid(start=0.5, stop=50, n_points=200)
CONSUMPTION_GRID = LinSpacedGrid(start=0.001, stop=50, n_points=500)

dead = Regime(
    transition=None,
    states={"wealth": WEALTH_GRID},
    functions={"utility": terminal_utility},
    active=lambda age: age > 1,
)

# Standard exponential model
working_exp = Regime(
    actions={"consumption": CONSUMPTION_GRID},
    states={"wealth": WEALTH_GRID},
    state_transitions={"wealth": next_wealth},
    constraints={"borrowing_constraint": borrowing_constraint},
    transition=next_regime,
    functions={"utility": utility, "H": exponential_H},
    active=lambda age: age <= 1,
)

model_exp = Model(
    regimes={"working": working_exp, "dead": dead},
    ages=AgeGrid(start=0, stop=2, step="Y"),
    regime_id_class=RegimeId,
)

# Naive beta-delta model (different H per phase)
working_naive = Regime(
    actions={"consumption": CONSUMPTION_GRID},
    states={"wealth": WEALTH_GRID},
    state_transitions={"wealth": next_wealth},
    constraints={"borrowing_constraint": borrowing_constraint},
    transition=next_regime,
    functions={
        "utility": utility,
        "H": SolveSimulateFunctionPair(
            solve=exponential_H,
            simulate=beta_delta_H,
        ),
    },
    active=lambda age: age <= 1,
)

model_naive = Model(
    regimes={"working": working_naive, "dead": dead},
    ages=AgeGrid(start=0, stop=2, step="Y"),
    regime_id_class=RegimeId,
)

Analytical Solution¶
With $\log$ utility, the optimal consumption rule is $c_t = w_t / D_t$, where $D_t$ is a "marginal propensity to save" denominator.

At $t = 1$ (one period before terminal), the agent maximizes:

$$\log(c) + \beta\delta \log(w_1 - c)$$

First-order condition: $1/c = \beta\delta / (w_1 - c)$, giving $c_1 = w_1 / (1 + \beta\delta)$.

At $t = 0$, the naive agent uses the perceived exponential continuation value $\hat{V}_1(w) = (1 + \delta)\log(w) + \text{const}$, which has coefficient $(1 + \delta)$ on $\log(w)$. So the agent maximizes:

$$\log(c) + \beta\delta (1 + \delta) \log(w_0 - c)$$

giving $c_0 = w_0 / \bigl(1 + \beta\delta(1 + \delta)\bigr)$.

| Agent | $c_0$ | $c_1$ |
|---|---|---|
| Exponential ($\beta = 1$) | $w_0 / (1 + \delta + \delta^2)$ | $w_1 / (1 + \delta)$ |
| Naive ($\beta < 1$) | $w_0 / (1 + \beta\delta(1 + \delta))$ | $w_1 / (1 + \beta\delta)$ |
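The $t = 1$ first-order condition can be sanity-checked numerically (a standalone sketch; $w_1 = 11$ is an arbitrary test value, not a model quantity):

```python
beta, delta = 0.7, 0.95
bd = beta * delta
w1 = 11.0  # arbitrary period-1 wealth for the check

# Closed-form optimum at t = 1.
c1 = w1 / (1 + bd)

# The derivative d/dc [log(c) + bd * log(w1 - c)] = 1/c - bd/(w1 - c)
# should vanish at the closed-form c1.
foc = 1 / c1 - bd / (w1 - c1)
assert abs(foc) < 1e-12
```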
BETA = 0.7
DELTA = 0.95
W0 = 20.0

def analytical_consumption(beta, delta, w0):
    """Return (c0, c1) from the closed-form solution."""
    bd = beta * delta
    d1 = 1 + bd
    d0 = 1 + bd * (1 + delta)
    c0 = w0 / d0
    c1 = (w0 - c0) / d1
    return c0, c1

c0_exp, c1_exp = analytical_consumption(1.0, DELTA, W0)
c0_naive, c1_naive = analytical_consumption(BETA, DELTA, W0)

print(f"{'Type':<15} {'c_0':>8} {'c_1':>8}")
print("-" * 33)
print(f"{'Exponential':<15} {c0_exp:8.4f} {c1_exp:8.4f}")
print(f"{'Naive':<15} {c0_naive:8.4f} {c1_naive:8.4f}")

Type c_0 c_1
---------------------------------
Exponential 7.0114 6.6608
Naive 8.7080 6.7820
The naive agent consumes more at $t = 0$ than the exponential agent — present bias pulls consumption forward. At $t = 1$, the naive agent also consumes more, because $\beta\delta < \delta$: the period-1 self consumes the larger share $1/(1 + \beta\delta)$ of its remaining wealth, which outweighs having arrived with less of it.
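This decomposition of the $c_1$ comparison can be verified from the closed forms (a standalone check using the same parameter values as the notebook):

```python
beta, delta, w0 = 0.7, 0.95, 20.0
bd = beta * delta

# Period-1 wealth implied by each agent's c_0.
w1_naive = w0 - w0 / (1 + bd * (1 + delta))
w1_exp = w0 - w0 / (1 + delta * (1 + delta))  # beta = 1 case

# Period-1 consumption shares.
share_naive = 1 / (1 + bd)
share_exp = 1 / (1 + delta)

assert w1_naive < w1_exp                            # naive arrives with less wealth
assert share_naive > share_exp                      # ... but consumes a larger share
assert w1_naive * share_naive > w1_exp * share_exp  # net effect: higher c_1
```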
initial_conditions = {
    "age": jnp.array([0.0]),
    "wealth": jnp.array([W0]),
    "regime": jnp.array([RegimeId.working]),
}

result_exp = model_exp.simulate(
    params={"working": {"H": {"discount_factor": DELTA}}},
    initial_conditions=initial_conditions,
    period_to_regime_to_V_arr=None,
)
df_exp = result_exp.to_dataframe().query('regime == "working"')

print("Exponential agent:")
print(
    f" c_0 = {df_exp.loc[df_exp['age'] == 0, 'consumption'].iloc[0]:.4f} "
    f"(analytical: {c0_exp:.4f})"
)
print(
    f" c_1 = {df_exp.loc[df_exp['age'] == 1, 'consumption'].iloc[0]:.4f} "
    f"(analytical: {c1_exp:.4f})"
)

AOT compilation: 3 unique functions (3 regime-period pairs, 2 workers)
1/3 working (age 0)
lowering ...
lowered in 55.0ms
2/3 working (age 1)
lowering ...
lowered in 25.3ms
3/3 dead (age 2)
lowering ...
lowered in 10.9ms
compiling working (age 0) ...
compiling working (age 1) ...
compiled working (age 0) 120.2ms
compiling dead (age 2) ...
compiled working (age 1) 122.3ms
compiled dead (age 2) 32.7ms
Starting solution
Age 2 (1 regimes):
finished in 104.8ms
Age 1 (1 regimes):
finished in 4.1ms
Age 0 (1 regimes):
finished in 3.3ms
Solution complete (113.7ms)
Starting simulation
Age 0 (1 regimes):
finished in 950.4ms
Age 1 (1 regimes):
finished in 147.1ms
Age 2 (1 regimes):
finished in 41.1ms
Simulation complete (1.3s)
Exponential agent:
c_0 = 7.0149 (analytical: 7.0114)
c_1 = 6.7143 (analytical: 6.6608)
Naive Agent¶
The naive agent uses model_naive, built with a SolveSimulateFunctionPair
wrapping exponential_H and beta_delta_H. During backward induction, exponential_H computes the perceived
value function $\hat{V}$; during simulation, beta_delta_H applies present bias for
action selection.
The params dict is the union of both variants’ parameters — discount_factor
for exponential_H, plus beta and delta for beta_delta_H. Each variant
receives only the kwargs it expects.
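The routing of the merged params dict can be pictured with a small signature-inspection sketch. This is illustrative only: it is not PyLCM's actual implementation, and call_with_matching_params is a hypothetical helper invented here.

```python
import inspect

def exponential_H(utility, E_next_V, discount_factor):
    return utility + discount_factor * E_next_V

def beta_delta_H(utility, E_next_V, beta, delta):
    return utility + beta * delta * E_next_V

# Merged params: the union of both variants' parameters.
H_params = {"discount_factor": 0.95, "beta": 0.7, "delta": 0.95}

def call_with_matching_params(h, utility, E_next_V, params):
    """Pass only the params that appear in h's signature (illustrative)."""
    extra = set(inspect.signature(h).parameters) - {"utility", "E_next_V"}
    return h(utility, E_next_V, **{k: params[k] for k in extra})

v_solve = call_with_matching_params(exponential_H, 1.0, 2.0, H_params)  # 1 + 0.95*2
v_sim = call_with_matching_params(beta_delta_H, 1.0, 2.0, H_params)     # 1 + 0.7*0.95*2
```

Each variant sees only its own keyword arguments even though the dict carries all three.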
result_naive = model_naive.simulate(
    params={
        "working": {
            "H": {"discount_factor": DELTA, "beta": BETA, "delta": DELTA},
        },
    },
    initial_conditions=initial_conditions,
    period_to_regime_to_V_arr=None,
)
df_naive = result_naive.to_dataframe().query('regime == "working"')

print("Naive agent:")
print(
    f" c_0 = {df_naive.loc[df_naive['age'] == 0, 'consumption'].iloc[0]:.4f} "
    f"(analytical: {c0_naive:.4f})"
)
print(
    f" c_1 = {df_naive.loc[df_naive['age'] == 1, 'consumption'].iloc[0]:.4f} "
    f"(analytical: {c1_naive:.4f})"
)

AOT compilation: 3 unique functions (3 regime-period pairs, 2 workers)
1/3 working (age 0)
lowering ...
lowered in 29.3ms
2/3 working (age 1)
lowering ...
lowered in 25.7ms
3/3 dead (age 2)
lowering ...
lowered in 8.0ms
compiling working (age 0) ...
compiling working (age 1) ...
compiled working (age 1) 94.6ms
compiling dead (age 2) ...
compiled working (age 0) 139.1ms
compiled dead (age 2) 45.0ms
Starting solution
Age 2 (1 regimes):
finished in 2.8ms
Age 1 (1 regimes):
finished in 3.2ms
Age 0 (1 regimes):
finished in 4.0ms
Solution complete (11.5ms)
Starting simulation
Age 0 (1 regimes):
finished in 209.3ms
Age 1 (1 regimes):
finished in 149.6ms
Age 2 (1 regimes):
finished in 32.5ms
Simulation complete (396.9ms)
Naive agent:
c_0 = 8.7183 (analytical: 8.7080)
c_1 = 6.8145 (analytical: 6.7820)
Comparison¶
The small differences between numerical and analytical solutions are due to grid discretization.
import pandas as pd

comparison = pd.DataFrame(
    {
        "Agent": ["Exponential", "Naive"],
        "c_0 (numerical)": [
            df_exp.loc[df_exp["age"] == 0, "consumption"].iloc[0],
            df_naive.loc[df_naive["age"] == 0, "consumption"].iloc[0],
        ],
        "c_0 (analytical)": [c0_exp, c0_naive],
        "c_1 (numerical)": [
            df_exp.loc[df_exp["age"] == 1, "consumption"].iloc[0],
            df_naive.loc[df_naive["age"] == 1, "consumption"].iloc[0],
        ],
        "c_1 (analytical)": [c1_exp, c1_naive],
    }
)
comparison = comparison.set_index("Agent")
comparison.style.format("{:.4f}")

Summary¶
Beta-delta discounting in PyLCM requires no core modifications:
Define two functions — one exponential, one with present bias:
def exponential_H(utility, E_next_V, discount_factor):
    return utility + discount_factor * E_next_V

def beta_delta_H(utility, E_next_V, beta, delta):
    return utility + beta * delta * E_next_V

Wrap them in SolveSimulateFunctionPair so backward induction uses exponential $\delta$ while action selection uses beta-delta $\beta\delta$:

from lcm import SolveSimulateFunctionPair

functions={
    "utility": utility,
    "H": SolveSimulateFunctionPair(
        solve=exponential_H,
        simulate=beta_delta_H,
    ),
}

Provide the union of both variants' parameters:
params = {"working": {"H": {"discount_factor": delta, "beta": beta, "delta": delta}}}
This split is essential: using beta_delta_H for both phases would compound $\beta$ at every recursive step, giving effective weights $(\beta\delta)^k$ instead of the correct $\beta\delta^k$.