1) Problem statement
We must set plan safeguards and learning actions for a low‑data, high‑uncertainty catchment. The current TAP assessment is qualitative, precautionary and, in places, low in confidence, because model outputs were reported as single events, lacked regime statistics and salinity dynamics, and the panel's risk scoring showed wide dispersion on some risks (Appendix B–C). We cannot wait for perfect data, but we can structure the uncertainty so the Minister receives consistent, defensible advice.
2) Where the TAP has got us (and why that’s valuable)
- Assets & linkages: 20 ecological assets with seasonal flow–ecology considerations (Appendix A). Strong foundation for question design.
- Risk framework: Likelihood–consequence ratings over 10 and 50 years. Under high extraction, several assets reach high to extreme categories; low extraction reduces risks by ≥1 category for most assets (Tables 6.1–6.2; Figs 6.4–6.5).
- Quantification already present: Appendix C includes median and min–max risk scores, plus confidence and agreement metrics by asset and scenario (pp.54–59). This provides an on‑ramp for probabilistic pooling, not a rewrite.
- Method limitations: Appendix B and §5.3 record constraints—single “reporting events,” no error bands, limited spatial specificity, and no estuarine salinity modelling (pp.26–27; 52–53). We will work with those limits by reframing questions to the resolution available and flagging “known unknowns.”
3) What changes: a Cooke‑compliant re‑elicitation that builds on Report A
- Calibration: Short seed‑question set (outside Adelaide context) to measure each expert’s statistical accuracy and informativeness.
- Elicitation: Individual 5th/50th/95th (or full quantiles if feasible) for a short list of metrics mapped to assets used in Report A, e.g.:
  - Hydrology: dispersed take points; peak drawdown magnitude; rate‑of‑fall percentile; floodplain area inundated ≥X days; number of seasons with longitudinal connectivity.
  - Salinity: proportion of late‑dry season days with 0.5–13.6 ppt in tidal fresh reaches (nurseryfish window); frequency of salinity intrusion exceeding historical median.
- Weighting & pooling: Compute performance‑based weights; publish both pooled results and the weights for use in later rounds (a minimal weighting sketch follows this list).
- Traceability to TAP outputs: Every elicited variable is cross‑referenced to the flow–ecology statements (Appendix A) and the assets/scenarios already scored in Tables 6.1–6.2.
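A minimal weighting sketch (Python) of the classical-model calculation referenced above. It assumes 5th/50th/95th assessments against seed questions with known answers; all names are illustrative, the information scores are taken as given (they come from relative information against a uniform background measure), and a real analysis would use established Cooke-method tooling rather than this sketch.

```python
import numpy as np
from scipy.stats import chi2

# Under perfect calibration, realizations fall between the elicited
# 5th/50th/95th quantiles with these probabilities (4 inter-quantile bins).
P_BINS = np.array([0.05, 0.45, 0.45, 0.05])

def calibration_score(quantiles, realizations):
    """Cooke calibration score for one expert.

    quantiles:    (n_seed, 3) array of 5th/50th/95th assessments.
    realizations: (n_seed,) array of true seed-question answers.
    """
    n = len(realizations)
    counts = np.zeros(4)
    for (q5, q50, q95), x in zip(quantiles, realizations):
        counts[np.searchsorted([q5, q50, q95], x)] += 1
    s = counts / n
    nonzero = s > 0  # only bins with observed realizations contribute
    relative_info = np.sum(s[nonzero] * np.log(s[nonzero] / P_BINS[nonzero]))
    # 2*n*I is asymptotically chi-squared with 3 degrees of freedom;
    # the calibration score is the p-value of that test.
    return 1.0 - chi2.cdf(2.0 * n * relative_info, df=3)

def performance_weights(calibration, information, alpha=0.05):
    """Unnormalised weight = calibration x information, zeroed below
    the significance cut-off alpha, then normalised across experts."""
    c, i = np.asarray(calibration), np.asarray(information)
    w = np.where(c >= alpha, c * i, 0.0)
    return w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))
```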
4) Integrating Poff’s framing (states, rates, traits; managing to resilience)
- States: risk to habitats/communities (e.g., tidal freshwaters, freshwater floodplain).
- Rates: migration timing, growth, recruitment probabilities under altered rates‑of‑rise/fall.
- Traits: focus on flow‑sensitive functional traits (diadromy, velocity dependence) to keep generality where species data are thin.
- Nonstationarity: evaluate scenarios against historical regimes (e.g., 1900s–1980s drier baseline vs recent wetter decades) so a 2030 take is judged relative to plausible climates, not a single weather year (a regime sketch follows the references below).
See:
- Poff, N.L., 2018. Beyond the natural flow regime? Broadening the hydro-ecological foundation to meet environmental flows challenges in a non-stationary world. Freshw Biol 63, 1011–1021.
- Poff, N.L., Brown, C.M., Grantham, T.E., Matthews, J.H., Palmer, M.A., Spence, C.M., Wilby, R.L., Haasnoot, M., Mendoza, G.F., Dominique, K.C., Baeza, A., 2016. Sustainable water management under future uncertainty with eco-engineering decision scaling. Nature Clim Change 6, 25–34.
- Poff, N.L., Tharme, R.E., Arthington, A.H., 2017. Evolution of Environmental Flows Assessment Science, Principles, and Methodologies, in: Water for the Environment. Elsevier, pp. 203–236.
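To illustrate the nonstationarity point, the regime sketch below summarises one connectivity-style metric across two historical epochs rather than a single weather year. The threshold, units and epoch boundaries are placeholders for the TAP to set, not proposals.

```python
import pandas as pd

CONNECTIVITY_ML_DAY = 50.0  # placeholder threshold; for the TAP to set

def metric_by_epoch(flow: pd.Series, epochs: dict) -> pd.DataFrame:
    """Spread (10th/50th/90th percentiles) of annual days above a
    connectivity threshold within each historical epoch, so a proposed
    take is judged against a range of plausible climates.

    flow: daily flow series on a DatetimeIndex.
    """
    rows = []
    for name, (start, end) in epochs.items():
        sub = flow.loc[start:end]
        annual_days = (sub >= CONNECTIVITY_ML_DAY).groupby(sub.index.year).sum()
        rows.append({"epoch": name,
                     "p10": annual_days.quantile(0.10),
                     "median": annual_days.quantile(0.50),
                     "p90": annual_days.quantile(0.90)})
    return pd.DataFrame(rows)

# Drier early/mid-century baseline vs recent wetter decades (illustrative).
epochs = {"dry_baseline": ("1900-01-01", "1989-12-31"),
          "recent_wet":   ("1990-01-01", "2020-12-31")}
```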
5) What this delivers to the Minister
- Single pooled and weighted risk metric per asset (with intervals), aligned to Report A’s asset list and scenarios, plus an exposure–response narrative that a lay reader can follow.
- Safeguards linked to metrics (e.g., pump‑start thresholds; rate‑of‑fall caps; estuarine salinity guardrails); a trigger sketch follows this list.
- Learning plan: triggers for model refinement, targeted monitoring, and re‑elicitation cadence.
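A minimal trigger sketch of how metric-linked safeguards could be checked against daily data. Every threshold value is a hypothetical placeholder for policy to set, and the column names are assumptions about the input data.

```python
import pandas as pd

# Hypothetical safeguard levers; all values are placeholders, not advice.
PUMP_START_ML_DAY = 200.0    # no take permitted below this flow
MAX_FALL_ML_DAY = 15.0       # cap on day-on-day rate of fall
SALINITY_CEILING_PPT = 13.6  # upper edge of the nurseryfish window

def daily_safeguard_flags(df: pd.DataFrame) -> pd.DataFrame:
    """df is assumed to carry 'flow' (ML/day) and 'salinity' (ppt)
    columns on a daily DatetimeIndex; returns one flag per safeguard."""
    flags = pd.DataFrame(index=df.index)
    flags["pumping_allowed"] = df["flow"] >= PUMP_START_ML_DAY
    flags["fall_cap_breached"] = (-df["flow"].diff()) > MAX_FALL_ML_DAY
    flags["salinity_breach"] = df["salinity"] > SALINITY_CEILING_PPT
    return flags
```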
6) Process flow and timeline (indicative; can compress if needed)
- Week 0–1 – Pre‑brief & scoping check: confirm answerable questions at current resolution; log “parked” questions for future data.
- Week 2 – Calibration session: 60–90 minutes, remote is fine.
- Week 3–4 – Elicitation sessions: 2× 90 minutes per expert (staggered).
- Week 5 – Pooling & diagnostics: compute weights; sensitivity to equal‑weights; produce asset‑level results.
- Week 6 – Policy synthesis sprint: safeguards, ESY narrative options, background report draft pages aligned to TAP outputs.
7) Roles and safeguards (independence preserved)
- TAP: individual judgements; review of question mapping; validation of pooled outputs; identification of modelling needs and feasible alternatives (e.g., HAND static accumulation; rating‑curve‑based back‑of‑envelope checks).
- Modelling team: quick wins only (rate‑of‑fall stats; days‑above‑bank; simple estuary salinity proxies if full hydrodynamic modelling is not available); a metrics sketch follows this list.
- Policy: define decision‑relevant thresholds, tolerable risk bands and safeguard levers.
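To show the "quick wins" really are light, the metrics sketch below derives three statistics from existing daily model outputs; the bankfull value and the assumption that the salinity series is already subset to the late‑dry season are illustrative.

```python
import pandas as pd

def quick_win_metrics(flow: pd.Series, late_dry_salinity: pd.Series,
                      bankfull_ml_day: float) -> dict:
    """Hydrograph- and salinity-derived statistics computable from
    existing model outputs; no new hydrodynamic modelling required."""
    falls = (-flow.diff()).dropna()
    falls = falls[falls > 0]  # falling-limb days only
    return {
        # rate-of-fall percentile (90th) across falling days
        "fall_p90_ml_day": float(falls.quantile(0.90)),
        # days-above-bank count for the period supplied
        "days_above_bank": int((flow > bankfull_ml_day).sum()),
        # proportion of late-dry-season days inside the 0.5-13.6 ppt
        # nurseryfish salinity window (series assumed pre-subset)
        "nursery_window_prop": float(
            late_dry_salinity.between(0.5, 13.6).mean()),
    }
```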
8) How we handle low data / high uncertainty
- Scope triage: three buckets—(i) answerable now; (ii) answerable with a light modelling/statistic add; (iii) future round.
- Ignorance accounting: mark “insufficient” items as such (Cooke’s squizzle territory) rather than forcing spurious precision.
- Regime context: ask experts for distributions across historical sequences to avoid over‑weighting a single year.
- Robustness checks: compare Cooke‑weighted vs equal‑weighted pools; publish both (a pooling sketch follows this list).
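For the robustness check, the pooling sketch below combines piecewise‑linear expert CDFs built from 5th/50th/95th assessments under both weight sets; the 10% overshoot range and all numbers are illustrative only.

```python
import numpy as np

def pooled_quantiles(experts, weights):
    """Linear opinion pool of piecewise-linear expert CDFs.

    experts: list of (q5, q50, q95) tuples; weights sum to 1."""
    qs = np.array(experts, dtype=float)
    lo, hi = qs.min(), qs.max()
    pad = 0.10 * (hi - lo if hi > lo else 1.0)  # overshoot range
    x = np.linspace(lo - pad, hi + pad, 501)
    cdf = np.zeros_like(x)
    for (q5, q50, q95), w in zip(qs, weights):
        cdf += w * np.interp(x, [lo - pad, q5, q50, q95, hi + pad],
                             [0.0, 0.05, 0.50, 0.95, 1.0])
    # Read the pooled 5/50/95 back off the pooled CDF.
    return {p: float(np.interp(p, cdf, x)) for p in (0.05, 0.50, 0.95)}

# Publish both pools side by side (numbers illustrative only).
experts = [(2.0, 5.0, 9.0), (3.0, 6.0, 12.0), (1.0, 4.0, 7.0)]
cooke_weighted = pooled_quantiles(experts, [0.7, 0.3, 0.0])
equal_weighted = pooled_quantiles(experts, [1/3, 1/3, 1/3])
```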
9) Engagement and products
- Products: (1) a single final technical report including the updated, policy‑relevant risk assessment (building on Part A) and recommendations for future priority research, monitoring and implementation (Part B); appendices: (A) calibration report with weights; (B) pooled risk metrics by asset and scenario; (C) safeguard menu with triggers; (D) two‑page lay brief.
- Alignment: Cross‑walk table mapping Report A assets, Appendix C risk summaries, and new pooled metrics so the TAP report and the Minister’s background report do not diverge.
10) Resource ask (lean)
- People: facilitator/analyst (0.4 FTE × 6 weeks), stats support (0.2 FTE), session ops (0.1 FTE).
- TAP time: ~4–5 hours per expert over 4–6 weeks.
11) Risks and mitigations
- Perception of “marking” experts → Frame weights as statistical properties; show equal‑weights comparison.
- Metric–model mismatch → Prioritise metrics we can compute from existing outputs (e.g., hydrograph‑derived rates) or from simple add‑ons; defer the rest.
- Schedule creep → Short, scripted sessions; pre‑circulated question set; stopwatch culture.
12) Benefits of this approach
- Respects and extends Report A:
- We retain the 20 assets and the qualitative reasoning the TAP already assembled; we quantify the same concepts rather than redefine the scope.
- We use the TAP’s own dispersion and confidence diagnostics in Appendix C as the case for calibration and pooling—this is “finishing the job,” not moving the goalposts (pp.54–59).
- We translate §5.3’s modelling limitations into scoping rules for the question set—answer what the model resolution can support, and label the rest for future modelling (pp.26–27).
- We carry the TAP’s precautionary principle forward, but with intervals and weights that make trade‑offs and safeguards defensible at Cabinet.