
Stock Synthesis User Manual
Version 3.30.23

Richard D. Methot Jr., Chantel R. Wetzel, Ian G. Taylor, Kathryn L. Doering,
Elizabeth F. Perl, and Kelli F. Johnson


NOAA Fisheries
Seattle, WA

October 31, 2024

1 Introduction

Fish population (aka “stock”) assessment models determine the impact of past fishing on the historical and current abundance of the population, evaluate sustainable rates of removals (catch), and project future levels of catch reflecting one or more risk-averse catch rules. These catch rules are codified in regional Fishery Management Plans according to requirements of the Sustainable Fisheries Act. In the U.S., approximately 500 federally managed fish and shellfish populations are managed under approximately 50 Fishery Management Plans. About 200 of these populations are assessed each year according to a prioritized schedule; however, many minor species have never been quantitatively assessed. Although the pace is slower than that of weather forecasting, fish stock assessments are operational models for fisheries management.

Assessment models typically assimilate annual catches, data on fish abundance from diverse surveys and fishery sources, and biological information regarding fish body size and proportions at age. A suite of models is available depending on the degree of data availability and unique characteristics of the fish population or its fishery. Where feasible, environmental time series are used as indicators of changes in population or observation processes, especially to improve the accuracy of the projections of abundance and sustainable catch into the future. Such linkages are based principally on correlations given the challenge of conducting field observations on an appropriate scale. The frontier of model development is in the rapid estimation of parameters to include random temporal effects, in the simultaneous modeling of a suite of interacting species, and in the explicit treatment of the spatial distribution of the population.

Assessment models are loosely coupled to other models. For example, an ocean-temperature or circulation model or benthic-habitat map may be directly included in the pre-processing of the fish abundance survey. A time series of a derived ocean factor, like the North Atlantic Oscillation, can be included as an indicator of a change in a population process. Output of a multi-decadal time series of derived fish abundance can be an input to ecosystem and economic models to better understand cumulative impacts and benefits.

Stock Synthesis is an age- and size-structured assessment model in the class of models termed integrated analysis models. Stock Synthesis has evolved since its inception in order to model a wide range of fish populations and dynamics. The most recent major revision to Stock Synthesis occurred in 2016, when v.3.30 was introduced. This version required major revisions to the input files relative to earlier versions (see the Converting Files section for more information). The acronym for Stock Synthesis has evolved over time, with earlier versions referred to as SS2 (Stock Synthesis v.2.xx) and newer versions as SS3 (Stock Synthesis v.3.xx).

SS3 has a population sub-model that simulates a stock’s growth, maturity, fecundity, recruitment, movement, and mortality processes; an observation sub-model that estimates expected values for various types of data; a statistical sub-model that characterizes the data’s goodness of fit and obtains best-fitting parameters with associated variance; and a forecast sub-model that projects needed management quantities. SS3 outputs the quantities, with confidence intervals, needed to implement risk-averse fishery control rules. The model is coded in C++ with parameter estimation enabled by automatic differentiation (ADMB). Windows, Linux, and Mac versions are available. Output processing and associated tools are in R, and a graphical interface is in Qt. SS3 executables and support material are available on GitHub. The rich feature set in SS3 allows it to be configured for a wide range of situations. SS3 has become the basis for a large fraction of U.S. assessments and many other assessments around the world.

This manual provides a guide for using SS3, including a description of the input and output files and usage instructions. An overview and technical description of the model itself is in Methot and Wetzel (2013). However, SS3 has continued to evolve and grow since that publication, and this manual reflects the most up-to-date information regarding SS3. The model and a graphical user interface are available on GitHub, with older archived versions also available online at VLab. The VLab site also provides a user forum for posting questions and for accessing various additional materials. An output processing package in R, r4ss, is available for download from GitHub.

Additional guidance for new users can be found on the SS3 website, which contains tutorials for getting started and building your own models, as well as topic-focused vignettes.

1.1 How To Cite

Please cite Stock Synthesis as:

Methot, R.D. and Wetzel, C.R. (2013). Stock Synthesis: A biological and statistical framework for fish stock assessment and fishery management. Fisheries Research, 142: 86-99. https://doi.org/10.1016/j.fishres.2012.10.012

Please cite the Stock Synthesis User Manual as:

Methot, R. D., Jr., C. R. Wetzel, I. G. Taylor, and K. Doering. (2020). Stock Synthesis User Manual Version 3.30.15. U.S. Department of Commerce, NOAA Processed Report NMFS-NWFSC-PR-2020-05. https://doi.org/10.25923/5wpn-qt71

2 File Organization

2.1 Input Files

  1. starter.ss: file containing the file names of the data file and the control file plus other run controls (required)

  2. datafile: file containing model dimensions and the data (required)

  3. control file: file containing set-up for the parameters (required)

  4. forecast.ss: file containing specifications for reference points and forecasts (required)

  5. ss3.par (previously ss.par): previously created parameter file that can be read to overwrite the initial parameter values in the control file (optional)

  6. wtatage.ss: file containing empirical input of body weight by fleet and population and empirical fecundity-at-age (optional)

  7. runnumber.ss: file containing a single number used as run number in output to CumReport.sso and in the processing of profilevalues.ss (optional)

  8. profilevalues.ss: file containing special conditions for batch file processing (optional)

2.2 Output Files

  1. data_echo.ss_new: Contains the input data as read by the model. In model versions prior to v.3.30.19, a single data.ss_new file was created that included the echoed data, the expected data values (data_expval.ss), and any bootstrap data files selected (data_boot_x.ss).

  2. data_expval.ss: Contains the expected data values given the model fit. This file is only created if the value for “Number of datafiles to produce” in the starter file is set to 2 or greater.

  3. data_boot_x.ss: A new data file filled with bootstrap data based on the original input data and variances. This file is only created if the value for “Number of datafiles to produce” in the starter file is set to 3 or greater. A separate bootstrap data file will be written for each bootstrap requested, where x in the file name indicates the bootstrap number (e.g., data_boot_001.ss, data_boot_002.ss, ...).

  4. control.ss_new: Updated version of the control file with final parameter values replacing the initial parameter values.

  5. starter.ss_new: New version of the starter file with annotations.

  6. forecast.ss_new: New version of the forecast file with annotations.

  7. warning.sso: This file contains a list of warnings generated during program execution. Starting in v.3.30.20, warnings are categorized as either “Note” or “Warning”. An item marked as “Note” denotes settings that the user may want to revise but that do not require any additional changes for the model to run. Items marked with “Warning” may or may not have allowed the model to finish running. Items with a fatal warning caused the model to fail while reading the input files or during calculations. Warnings classified as errors or adjustments may be causing calculation issues, even if the model was able to finish reading files and running, and should be addressed by the user.

  8. echoinput.sso: This file is produced while reading the input files and includes an annotated echo of the input. The sole purpose of this output file is debugging input errors.

  9. Report.sso: This file is the primary report file.

  10. ss_summary.sso: Output file that contains all the likelihood components, parameters, derived quantities, total biomass, summary biomass, and catch. This file offers an abridged version of the report file that is useful for quick model evaluation. This file is only available in v.3.30.08.03 and greater.

  11. CompReport.sso: Observed and expected composition data in a list-based format.

  12. Forecast-report.sso: Output of management quantities from the benchmark and forecast calculations.

  13. CumReport.sso: This file contains a brief version of the run output; output is appended to the current content of the file so that the results of several runs can be collected together. This is useful when a batch of runs is being processed.

  14. Covar.sso: This file replaces the standard ADMB ss.cor with an output of the parameter and derived quantity correlations in database format.

  15. ss3.par (previously ss.par): This file contains all estimated and fixed parameters from the model run.

  16. ss.std, ss.rep, ss.cor, etc.: Standard ADMB output files.

  17. checkup.sso: Contains details of selectivity parameters and resulting vectors. This is written during the first call of the objective function.

  18. Gradient.dat: New for v.3.30, this file shows parameter gradients at the end of the run.

  19. rebuild.dat: Output formatted for direct input to Andre Punt’s rebuilding analysis package. Cumulative output is written to REBUILD.SS (useful when doing MCMC or profiles).

  20. SIS_table.sso: Output formatted for reading into the NMFS Species Information System (SIS).

  21. Parmtrace.sso: Parameter values at each iteration.

  22. posteriors.sso, derived_posteriors.sso, posterior_vectors.sso: Files associated with MCMC.

3 Starting Stock Synthesis

SS3 is typically run through the command line interface, although it can also be called from another program, from R, from the SSI (Stock Synthesis Interface), or from a script file (such as a DOS batch file). SS3 is compiled for Windows, Mac, and Linux operating systems. The memory requirements depend on the complexity of the model you run, but in general, SS3 will run much slower on computers with inadequate memory. See Running Stock Synthesis for additional notes on methods of running SS3.

Communication with the program is through text files. When the program first starts, it reads the file starter.ss, which typically must be located in the same directory from which SS3 is being run. The file starter.ss contains required input information plus references to other required input files, as described in the File Organization section. The names of the control and data files must match the names specified in the starter.ss file. File names, including starter.ss, are case-sensitive on Linux and Mac systems but not on Windows. The echoinput.sso file outputs how the executable reads each input file and can be used for troubleshooting when trying to set up a model correctly. Output from SS3 consists of text files containing specific keywords. Output processing programs, such as Excel or R, can search for these keywords and parse the specific information located below that keyword in the text file.
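
As a brief illustration of this keyword structure, the sketch below uses the r4ss package to read a completed run and then scans Report.sso directly for a keyword. This is a minimal sketch: the directory name "my_model" is hypothetical and assumes a finished SS3 run in that folder.

    # Minimal R sketch (directory name "my_model" is hypothetical)
    library(r4ss)
    replist <- SS_output(dir = "my_model", verbose = FALSE, printstats = FALSE)
    head(replist$parameters)   # parameter estimates parsed from Report.sso

    # The same keyword structure can be scanned directly:
    rep <- readLines(file.path("my_model", "Report.sso"))
    i <- grep("^LIKELIHOOD", rep)[1]   # locate a keyword
    rep[i + 0:5]                       # rows below the keyword hold that section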

4 Converting Files from Stock Synthesis v.3.24

Converting files from v.3.24 to v.3.30 can be performed using the program ss_trans.exe, which takes v.3.24 files as input and outputs files in v.3.30 format. SS_trans executables are available for v.3.30.01 - v.3.30.17; the transitional executable was phased out with v.3.30.18. If a model needs to be converted from v.3.24 to a recent version, one should use the v.3.30.17 ss_trans.exe, available from the v.3.30.17 release page on GitHub, to convert the files, and then make by hand any additional adjustments needed between v.3.30.17 and newer versions. To see the changes that need to be made between v.3.30.17 and the latest release of SS3, please see the change log for v.3.30.19 onward as well as the Excel version of the change log for versions prior to v.3.30.19.

The following file structure and steps are recommended for converting model files:

  1. Create a “transition” folder. Place the four main model files (control, data, starter, and forecast) from SS3 v.3.24 in the transition folder along with the SS3 transition executable (ss_trans.exe). One tip is to use the control.ss_new file from the estimated v.3.24 model rather than the original control file, which will set all parameter values to the previously estimated MLE values. Run the transition executable with a command like ss_trans -stopph -1, which will write ss_new files but not go through the population dynamics or produce other output (whereas -stopph 0 will run through the dynamics once without estimation and produce Report.sso and other output files, but may not produce the ss_new files if there is any issue with the model setup).

  2. Create a “converted” folder. Place the ss_new files (data.ss_new, control.ss_new, starter.ss_new, forecast.ss_new) that the transition executable created in the “transition” folder into this new folder. Rename the ss_new files with the appropriate suffixes and change the names in the starter.ss file accordingly.

  3. Review the converted control file (control.ss_new) to confirm that all model functions converted correctly. The structural changes and assumptions for some advanced model features are too complicated to convert automatically; see below for some known features that may not convert. When needed, it is recommended to modify the converted control file for only the features that failed to convert properly.

  4. Change the max phase to a value greater than the last phase in which any parameter is set to be estimated in the control file. Run the new v.3.30 executable (ss3.exe) within the “converted” folder using the renamed ss_new files created by the transition executable.

  5. Compare likelihood and model estimates between the v.3.24 and v.3.30 model versions.

  6. If desired, update to versions of Stock Synthesis > v.3.30.17 by running the new v.3.30 input files with the higher executable.
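
The folder workflow in steps 1 and 2 can be scripted. Below is a minimal R sketch under the assumption that the v.3.24 input files are named model.dat and model.ctl (illustrative names only) and that ss_trans.exe sits in the working directory; adjust names and paths to your own setup.

    # Illustrative R sketch of steps 1-2 above; file names are hypothetical
    dir.create("transition"); dir.create("converted")
    file.copy(c("starter.ss", "forecast.ss", "model.ctl", "model.dat",
                "ss_trans.exe"), "transition")
    setwd("transition")
    system2("./ss_trans", args = c("-stopph", "-1"))  # write ss_new files only
    setwd("..")
    ss_new <- c("data.ss_new", "control.ss_new", "starter.ss_new",
                "forecast.ss_new")
    file.copy(file.path("transition", ss_new), "converted")
    # Step 2: rename to the file names referenced in starter.ss
    file.rename(file.path("converted", ss_new),
                file.path("converted", c("model.dat", "model.ctl",
                                         "starter.ss", "forecast.ss")))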

Some options have been substantially changed in v.3.30, which impedes the automatic conversion of v.3.24 model files. Known examples of v.3.24 options that cannot be converted, but for which better alternatives are available in v.3.30, are:

  1. The use of catchability (Q) deviations,

  2. Complex birth seasons,

  3. Environmental effects on spawner-recruitment parameters,

  4. Setup of time-varying quantities for models that used the no-longer-available features (e.g., logistic bound constraint).

5 Starter File

5.1 Reading the Manual’s format

SS3 begins by reading the file starter.ss. The starter file contains required information on the names of the control and data files, run conditions, and output specifications. When the term COND appears in the “Typical Value” column of this documentation (it does not actually appear in the model files), it indicates that the following section is omitted except under certain conditions, or that the factors included in the following section depend upon certain conditions. In most cases, the description in the definition column is the same as the label output to the ss_new files.

5.2 Terminology for Fishing Mortality, \(F\)

Here we introduce some terminology related to fishing mortality, \(F\). This will provide context for some of the quantities that will be read from the starter file and used throughout the document.

\(f\) is fleet.

\(t\) is a time step; continuous across years \(y\) and seasons \(s\); equivalent to year if only 1 season.

\(a\) is age.

\(s_{t,f,a}\) is age-specific selectivity for a fleet. If selectivity is length-specific, then age-specific selectivity due to length-selectivity is calculated as the dot product across length bins of length selectivity and the normal (or log-normal) distribution of length-at-age. If selectivity is both length- and age-based, which is an entirely normal concept in SS3, then age selectivity due to length selectivity is calculated first, then multiplied by the direct age selectivity. This compound age selectivity is used in the mortality calculations and is reported as Asel2 in report:32 of Report.sso. Selectivity can be sex-specific, and different growth morphs and platoons can have different age-selectivity due to the effect of length-selectivity on their unique size-at-age. This added dimension, \(g\), for biological group is not included in the nomenclature here but exists in all the SS3 calculations.
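
The dot-product calculation described above can be illustrated numerically. The following R sketch uses made-up length bins, a logistic length selectivity, and a normal distribution of length-at-age; none of these values come from SS3 itself.

    # Illustrative only: age selectivity derived from length selectivity
    len_mid <- seq(10, 80, by = 2)                  # length bin midpoints (cm)
    sel_len <- 1 / (1 + exp(-(len_mid - 40) / 5))   # logistic length selectivity
    ages    <- 0:20
    mu_len  <- 75 * (1 - exp(-0.2 * (ages + 0.5)))  # von Bertalanffy mean length
    sd_len  <- 0.10 * mu_len                        # 10% CV, made up

    sel_age <- sapply(seq_along(ages), function(a) {
      p <- dnorm(len_mid, mu_len[a], sd_len[a])     # length-at-age distribution
      p <- p / sum(p)                               # normalize across bins
      sum(p * sel_len)                              # dot product across length bins
    })
    round(sel_age, 3)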

\(F_{t,f,a}\) is fishing mortality at age for fleet \(f\). There is no subscript for area because each fleet is defined to operate in only one area.

\(F_{t,f}'\) is a fleet’s fishing mortality for the age that has selectivity equal to 1.0. This is also termed \(F'\) or \(\text{full\_}F\) in the SS3 system. If your model is using parameters for \(F\), then the parameter values are for \(F'\). Note that some selectivity curves, like the double normal, are explicit about having a maximum of 1.0, but other curves, like the logistic, and combinations of length selectivity and growth may produce an age-selectivity curve that never reaches 1.0, and time-varying non-parametric selectivity will routinely produce values > 1.0. In all cases, the resultant \(F_{t,f,a}\) comes from \(F_{t,f}' * s_{t,f,a}\), so the range of \(F'\) compensates for the scale of \(s\).

Apical selectivity is the maximum age-specific selectivity; it is not explicit in any internal calculation in SS3 and is just for reporting. If selectivity has a maximum value of 1.0, then \(\text{apical\_}F\) and \(\text{full\_}F\) are identical.

Fully-selected age range is not explicitly used in SS3, especially because SS3 applications routinely have multiple fleets with different selectivity patterns that may have little overlap.

Fbar is the average \(F\) over a user-specified range of ages, implicitly the fully-selected range, for the total \(F\) from all fleets. Some SS3 output options will display Fbar.

\(\text{Annual\_}F\) is essentially the same as Fbar and is an output quantity.

\(F\text{\_std}\) is an output quantity that may be based on \(\text{annual\_}F\) or other calculated quantities like exploitation rate. Importantly, the output values of \(F\text{\_std}\) may be presented as a ratio relative to an equivalent benchmark (reference point) quantity; e.g., \(F / F_{MSY}\). Further, the variance of \(F\text{\_std}\) will be calculated and output.

\(C_{t,f}\) is fleet-specific catch in a time step.

\(B_{t,f}\) is fleet specific available biomass, e.g., total biomass filtered by fleet-specific age selectivity, \(s_{t,f,a}\). Note that this is not adjusted by the \(max(s_{t,f,a})\).
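
Tying these definitions together, the sketch below computes \(F_{t,f,a} = F_{t,f}' * s_{t,f,a}\) and the available biomass \(B_{t,f}\) for a single fleet and time step. All numbers-at-age and weights are made up for illustration, not SS3 output.

    # Illustrative only: full_F, F-at-age, and available biomass
    N  <- c(1000, 800, 600, 420, 300)  # numbers at ages 0-4
    w  <- c(0.1, 0.4, 0.9, 1.4, 1.8)   # body weight (kg) at age
    s  <- c(0.0, 0.2, 0.7, 1.0, 1.0)   # age selectivity, max = 1.0
    Fp <- 0.3                          # full_F (F') for this fleet and time step

    F_age   <- Fp * s                  # F_{t,f,a} = F'_{t,f} * s_{t,f,a}
    apicalF <- max(F_age)              # equals full_F here because max(s) = 1
    B_avail <- sum(N * w * s)          # available biomass B_{t,f}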

5.3 Starter File Options (starter.ss)

Value Options Description
#C this is a starter comment Must begin with #C; the rest of the line is free form. All lines in this file beginning with #C will be retained and written to the top of several output files.
data_file.dat File name of the data file
control_file.ctl File name of the control file
0 Initial Parameter Values: Do not set equal to 1 if there have been any changes to the control file that would alter the number or order of parameters stored in the ss3.par file. Values in ss3.par can be edited, carefully. Do not run ss_trans.exe with a ss3.par file from v.3.24.
0 = use values in control file; and
1 = use ss3.par after reading setup in the control file.
1 Run display detail: With option 2, the display shows the value of each negative log-likelihood component for each iteration and shows where crash penalties are created.
0 = none other than admb outputs;
1 = one brief line of display for each iteration; and
2 = fuller display per iteration.
1 Detailed age-structure report: Option 0 will forgo the writing of the Report file, but the ss_summary file, which has minimal derived and estimated quantities, will still be written. This is a useful option for some data-limited assessment approaches (e.g., XSSS or SSS). Option 1 will write out the full Report file. Option 2 will write out select items in the Report file and will omit some more detailed sections (e.g., numbers-at-age).
0 = minimal output for data-limited methods;
1 = include all output (with wtatage.ss_new);
2 = brief output, no growth; and
3 = custom output
Custom report options: First value: -100 to start with minimal items, or -101 to start with all items. Next values: a list of items to add or remove, where negative values remove items and positive values add them; terminate with -999. The reporting number for each item that can be selected or omitted is shown in the Report file next to each section keyword.
-100
-5
9
11
15
-999
0 Write 1st iteration details: This output is largely unformatted and undocumented and is mostly used by the developer.
0 = omit; and
1 = write detailed intermediate calculations to echoinput.sso during first call.
0 Parameter Trace: This controls the output to parmtrace.sso. The contents of this output can be used to determine which values are changing when a model approaches a crash condition. It can also be used to investigate patterns of parameter changes as model convergence slowly moves along a ridge. To access parameter gradients, select option 4, which will write the gradient of each parameter with respect to each likelihood component.
0 = omit;
1 = write good iteration and active parameters;
2 = write good iterations and all parameters;
3 = write every iteration and all parameters; and
4 = write every iteration and active parameters.
Cumulative Report: Controls reporting to the file CumReport.sso. This cumulative report is most useful when accumulating summary information from likelihood profiles or when simply accumulating a record of all model runs within the current subdirectory.
0 = omit;
1 = brief; and
2 = full.
1 Full Priors: Turning this option on (1) adds the log likelihood contribution from all prior values for fixed and estimated parameters to the total negative log likelihood. With this option off (0), the total negative log likelihood will include the log likelihood for priors for only estimated parameters.
0 = only calculate priors for active parameters; and
1 = calculate priors for all parameters that have a defined prior.
1 Soft Bounds: This option creates a weak symmetric beta penalty for the selectivity parameters. This becomes important when estimating selectivity functions in which the values of some parameters cause other parameters to have negligible gradients, or when bounds have been set too widely such that a parameter drifts into a region in which it has negligible gradient. The soft bound creates a weak penalty to move parameters away from the bounds.
0 = omit; and
1 = use.
1 Number of Data Files to Output: The echo of the input data file makes no changes, retaining the order of the original file. Output files 2-N contain only observations that have not been excluded through use of the negative year denotation, and the order of these output observations is as processed by the model. At this time, tag-recapture data are not output to data_echo.ss_new. As of v.3.30.19, the output file names have changed; a separate file is created for the echoed data (data_echo.ss_new), the expected data values given the model fit (data_expval.ss), and any requested bootstrap data files (data_boot_x.ss, where x is the bootstrap number). In versions before v.3.30.19, these outputs were written sequentially to a single file called data.ss_new and needed to be parsed by the user into separate data files.
0 = none; As of v.3.30.16, none of the .ss_new files will be produced;
1 = output an annotated replicate of the input data file;
2 = add a second data file containing the model’s expected values with no added error; and
3+ = add N-2 parametric bootstrap data files.
8 Turn off estimation: Option -1 is useful for quickly reading a messy set of input files and producing the annotated control.ss_new and data_echo.ss_new files; option 0 is useful for examining model output based solely on the input parameter values. Similarly, a positive value allows examination of model output after completing the specified phase. Also see the usage note for restarting from a specified phase.
-1 = exit after reading input files;
0 = exit after one call to the calculation routines and production of sso and ss_new files; and
<positive value> = exit after completing this phase.
1000 MCMC burn interval Number of iterations to discard at the start of an MCMC run.
200 MCMC thin interval Number of iterations to discard between saved values during the main period of the MCMC run.
0.0 Jitter: The jitter function was revised in v.3.30. Starting values are now jittered based on a normal distribution with \(pr(P_{MIN}) = 0.1\%\) and \(pr(P_{MAX}) = 99.9\%\). A positive value here will add a small random jitter to the initial parameter values. When using the jitter option, take care when defining the low and high bounds for parameter values; in particular, -999 or 999 should not be used to define bounds for estimated parameters.
0 = no jitter done to starting values; and
> 0 starting values will vary with larger jitter values resulting in larger changes from the parameter values in the control or par file.
-1 sd Report Start:
-1 = begin annual sd report in start year; and
<year> = begin sd report this year.
-1 sd Report End:
-1 = end annual sd report in end year;
-2 = end annual sd report in last forecast year; and
<value> = end sd report in this year.
2 Extra sd Report Years: In a long time-series application, the model variance calculations will be smaller and faster if not all years are included in the sd reporting. For example, the annual sd reporting could start in 1960, and the extra option could select reporting in each decade before then.
0 = none; and
<value> = number of years to read.
COND: If Extra sd report years > 0
1940 1950 Vector of years for additional sd reporting. The number of years needs to equal the value specified in the above line (Extra sd Report Years).
0.0001 Final convergence criterion: This is a reasonable default value for the change in log likelihood denoting convergence. For applications with much data and a large total log-likelihood value, a larger convergence criterion may still provide acceptable convergence.
0 Retrospective year: Adjusts the model end year and disregards data after this year. May not handle time varying parameters completely.
0 = none; and
-x = retrospective year relative to end year.
0 Summary biomass min age Minimum integer age for inclusion in the summary biomass used for reporting and for calculation of total exploitation rate.
1 Depletion basis: Selects the basis for the denominator when calculating the degree of depletion in reproductive output (a.k.a. SSB). The calculated values are reported to the sd report relative to a fraction, X, of a comparable quantity calculated in the benchmark section or elsewhere.
0 = skip;
1 = \(X*B_{0}\): relative to virgin spawning biomass;
2 = \(X*B_{MSY}\): relative to the spawning biomass that achieves MSY;
3 = \(X*B_{styr}\): relative to the model start year spawning biomass;
4 = \(X*B_{endyr}\): relative to the spawning biomass in the model end year; and
5 = \(X*Dynamic~B_{0}\): relative to the calculated dynamic \(B_{0}\).
use tens and hundreds digits to invoke multi-year trailing average
append 0.1 to invoke ln(ratio)
1.0 Fraction (X) for depletion denominator Value for use in the calculation of the ratio for \(SB_{y}/(X*SB_{0})\).
SPR report scaling: SPR is the equilibrium SSB per recruit that would result from the current year’s F-at-age. The quantities identified by 1, 2, and 3 here are calculated in the benchmarks section. The one specified here is then used as the denominator in a ratio with the annual value of \((1 - SPR)\). This ratio (and its variance) is reported to the sd report output for the years selected in the sd report year selection above.
0 = skip;
1 = use \(1-SPR_{TARGET}\);
2 = use \(1-SPR\) at \(MSY\);
3 = use \(1-SPR\) at \(B_{TARGET}\); and
4 = no denominator, so report actual \(1-SPR\) values.
4 \(F\text{\_std}\) reporting units: An additional proxy for fishing intensity is based on the fraction of the population that is caught. As with SPR, the selected quantity will be calculated annually and in the benchmarks section. The ratio of the annual value to the selected (see \(F\) report basis below) benchmark value is reported to the sd report vector as \(F\text{\_std}\). Options 1 and 2 ignore details of age structure and are simply based on annual exploitation rate across all areas. If most catch occurs in one area and there is little movement between areas, this ratio is not informative about the \(F\) in the area where the catch occurs. Option 3 is a simple sum of the full \(F\)’s by fleet, so it may provide non-intuitive results when there are multiple areas or seasons or when the selectivities by fleet do not have good overlap in age. Option 4 is a real \(\text{annual\_}F\) calculated as a numbers-weighted \(F\) for a specified range of ages. The \(F\) for each age is calculated as \(Z-M\), where \(Z\) and \(M\) are each calculated as \(ln(N_{t}/N_{t+1})\) with and without \(F\) active, respectively. The numbers are summed over all biology morphs and areas for the beginning of the year, so this subsumes any seasonal pattern.
0 = skip;
1 = exploitation rate in biomass;
2 = exploitation rate in numbers;
3 = sum(\(F\)’s by fleet);
4 = Fbar: numbers weighted \(F\) for range of ages using \(Z-M\) approach; and
5 = Fbar: unweighted average \(F\) for range of ages.
Note that these \(F\) statistics do not depend upon whether the \(F\) approach uses the mid-season exploitation rate (Pope’s approximation) or continuous \(F\), nor whether the continuous \(F\) is based on parameters or the hybrid calculation method. Read more about the \(F\) method in the control file section. For more information on \(F\) reporting, see Metrics for Fishing Mortality.
COND: If \(F\text{\_std}\) reporting \(\geq\) 4 Specify range of ages. Upper age must be less than max age because of incomplete handling of the accumulator age for this calculation.
3 7 Age range if \(F\text{\_std}\) reporting = 4 or 5.
1 \(F\text{\_std}\) scaling: \(F\text{\_std}\) is typically reported as a ratio to the value of an equivalent \(F\) calculation that would occur at the benchmark level of fishing. Here the user selects the denominator for that ratio. This ratio can be presented as a multi-year trailing average in \(F\) or as a ln(ratio). For example, 122.1 would do a 12-year trailing average of the ratio using \(F_{MSY}\) and present the result as the ln(ratio).
0 = not relative, report raw values;
1 = use \(F\text{\_std}\) value relative to \(SPR_{TARGET}\);
2 = use \(F\text{\_std}\) value relative to \(F_{MSY}\); and
3 = use \(F\text{\_std}\) value relative to \(F_{B_{TARGET}}\).
use tens and hundreds digits to invoke multi-year averaged \(F\text{\_std}\)
append 0.1 to the integer to invoke ln(ratio)
MCMC output detail: Specify the format of MCMC output. This input requires the specification of two items: the output detail and a “bump” value to be added to \(ln(R_{0})\) in the first call to MCMC. A bias adjustment of 1.0 is applied to recruitment deviations in the MCMC phase, which could result in reduced recruitment estimates relative to the MLE when a lower bias adjustment value is applied. The small bump value is added to \(ln(R_{0})\) for the first call to MCMC in order to prevent the stock from hitting the lower bounds when switching from MLE to MCMC. To select the default output option and apply a bump value of 0.01, enter 0.01: the integer part is the output detail and the decimal part is the bump value.
0 = default;
1 = output likelihood components and associated lambda values;
2 = write report for each mceval; and
3 = make output subdirectory for each mcmc vector.
0 ALK tolerance level: This effect is disabled in code; enter 0.
COND: Seed value (e.g., 1234) Specify a seed for data generation. This feature is not available in versions prior to v.3.30.15. This is an optional input allowing specification of a random number seed. If you do not want to specify a seed, skip this input line and end the starter file with the check value (3.30).
3.30 Model version check value. A value of 3.30 indicates that the control and data files are currently in v.3.30 format. A value of 999 indicates that the control and data files are in the previous v.3.24 format. The ss_trans.exe executable should be used to convert the v.3.24 control and data files to the new format. All ss_new files are in the v.3.30 format, so starter.ss_new has 3.30 on the last line. The mortality-growth parameter section has a new sequence, and v.3.30 cannot read a ss3.par file produced by v.3.24 and earlier, so ensure that the read-par-file option at the top of the starter file is set to 0. The Converting Files from Stock Synthesis v.3.24 section has additional information on model features that may impede file conversion.
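
As a numerical check of the \(Z-M\) calculation behind \(F\text{\_std}\) option 4 above, the R sketch below projects made-up numbers-at-age one year ahead with and without \(F\) and recovers \(F\) at age as the difference of the two log survival rates. All inputs are illustrative, not SS3 internals.

    # Illustrative only: annual_F via the Z - M approach
    N    <- c(1000, 800, 600, 420)     # numbers at ages in the chosen range
    M    <- 0.2
    F_in <- c(0.05, 0.10, 0.30, 0.30)  # true F-at-age (made up)
    N1_F   <- N * exp(-(M + F_in))     # next year, F active
    N1_noF <- N * exp(-M)              # next year, F inactive

    Z_age <- log(N / N1_F)             # Z = ln(N_t / N_t+1) with F active
    M_age <- log(N / N1_noF)           # same with F inactive
    F_age <- Z_age - M_age             # recovers F_in
    Fbar  <- weighted.mean(F_age, N)   # numbers-weighted annual F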

6 Forecast File

The specification of options for forecasts is contained in the mandatory input file named forecast.ss. See Forecast Module: Benchmark and Forecasting Calculations for additional details.

The term COND appears in the “Typical Value” column of this documentation (it does not actually appear in the model files) and indicates that the following section is omitted except under certain conditions, or that the factors included in the following section depend upon certain conditions. In most cases, the description in the definition column is the same as the label output to the ss_new files.

6.1 Forecast File Options (forecast.ss)

Value Options Description
1 Benchmarks/Reference Points: SS3 checks for consistency of the forecast specification and the benchmark specification. It will turn on benchmarks if necessary and report a warning.
0 = skip/omit;
1 = calculate \(F_{SPR}\), \(F_{B_{TARGET}}\), and \(F_{MSY}\);
2 = calculate \(F_{SPR}\), \(F_{MSY}\), and \(F_{0.10}\); and
3 = add \(F\) at \(B_{LIMIT}\)
1 MSY Method: Specifies the basis for calculating a single population-level \(F_{MSY}\) value.
1 = \(F_{SPR}\) as proxy;
2 = calculate \(F_{MSY}\);
3 = \(F_{B_{TARGET}}\) as proxy or \(F_{0.10}\);
4 = \(F_{end year}\) as proxy; and
5 = \(F_{MEY}\).
COND: MSY Method = 5
1 MEY units
1 = dead biomass;
2 = dead biomass without excluded bycatch fleet;
3 = retained biomass; and
4 = profits using price and costs.
MEY options - Fleet, Cost/F, Price/F, and Include \(F_{MEY}\) in Optimization To calculate \(F_{MEY}\), enter the fleet number, the cost per unit fishing mortality, the price per mt, and whether optimization should adjust the fleet’s \(F\) or keep it at the mean from the benchmark years (0 = no, 1 = yes). Take care when scaling the values used for cost/\(F\) and price/mt. Units in the example show cost = 0 and price = 1, so the result will be identical to MSY in weight. Note that if a fleet’s catch is excluded from the \(F_{MEY}\) search, its catch or profits are still included in the MSY value using historical \(F\) levels from the benchmark years.
-9999 0 0 0
0.45 \(SPR_{TARGET}\) SS3 searches for the \(F\) multiplier (\(F\text{mult}\)) that will produce this level of spawning biomass per recruit (reproductive output) relative to the unfished value.
0.40 Relative Biomass Target SS3 searches for the \(F\) multiplier that will produce this level of spawning biomass relative to unfished value. This is not “per recruit” and takes into account the spawner-recruitment relationship.
COND: Benchmarks = 3 \(B_{LIMIT}\) as a fraction of the \(B_{MSY}\) where a negative value will be applied as a fraction of \(B_{0}\)
-0.25
0 0 0 0 0 0 0 0 0 0 Benchmark Years: Requires 5 pairs of year values over which the mean of derived vectors will be calculated for use in the benchmark (e.g., MSY) calculations. The first pair of years is for biology (e.g., growth, natural mortality, maturity, fecundity); the second is for selectivity; the third is for relative \(F\)s among fleets; the fourth is for movement and recruitment distribution; and the fifth is for stock-recruitment (as the parameters, not as derived quantities). If a factor is not time-varying, select the first model year as the beginning year for that factor or else the variance will be artificially reduced.
-999: start year;
> 0: absolute year; and
\(<=\) 0: year relative to end year.
1 Benchmark Relative \(F\) Basis: The specification does not affect year range for selectivity and biology.
1 = use year range; and
2 = set range for \(\text{rel}F\) same as Forecast.
2 Forecast: This input is required but is ignored if benchmarks are turned off. This determines how forecast catches are calculated and removed from the population, which is separate from the “MSY Method” above. If \(F_{MSY}\) is selected, it uses whatever proxy (e.g., \(F_{SPR}\) or \(F_{B_{TARGET}}\)) is selected in the “MSY Method” row.
-1 = none, no forecast years;
0 = simple, single forecast year calculated;
1 = use \(F_{SPR}\);
2 = use \(F_{MSY}\);
3 = use \(F_{B_{TARGET}}\) or \(F_{0.10}\);
4 = set to mean \(F\) scalar for the forecast relative \(F\) years below; and
5 = input annual \(F\) scalar.
10 N forecast years (must be >= 1) At least one forecast year now required if the Forecast option above is >= 0 (Note: v.3.24 allowed zero forecast years).
1 \(F\) scalar/multiplier Only used if Forecast option = 5 (input annual \(F\) scalar), but is a required line in the forecast file.
There are 2 options for entering Forecast Years:
Option 1: This approach for forecast year ranges is no longer recommended because blocks, random effects, and other time-varying parameter changes can now operate on forecast years, and the new approach provides better control over averaging.
0 0 0 0 0 0 Enter 6 Forecast Year Values To continue to use this pre-v.3.30.22 approach, enter 6 values: beginning and ending years for selectivity, relative \(F\)s, and recruitment distribution. These are used to create means over the specified range of years. Values can be entered as the actual year, as -999 for the start year, or as 0 or a negative integer relative to the end year. It is important to note:
– Relative \(F\) for bycatch only fleets is scaled just like other fleets.
– For selectivity averaging with the new approach the method code is “1”, whereas with the old Forecast Selectivity Option, the code was “1” for using time-varying parameters. SS3 accounts for this change internally.
– Whenever calculating means, the calculated mean will have artificially lower variance than if a minimal range of years is selected.
0 Forecast Selectivity Option: Determines the selectivity used in the forecast years. Selecting 1 will allow time-varying selectivity parameters (e.g., random walk) to continue to apply in the forecast period. This setting is not included in Option 2.
0 = forecast selectivity means from year range; and
1 = forecast selectivity from annual time-varying parameters.
Option 2: To use the new approach, enter -12345 and omit entry of the Forecast Selectivity Option.
-12345 Invoke New Forecast Format Biology and selectivity vectors are updated annually in the forecast according to their time-varying parameters. Be sure to check the end year of the blocks and the deviation vectors. Input in this section directs the creation of means over historical years to override any time-varying changes. To invoke taking the mean of a range of historical recruitments after all adjustments and deviations were applied, see the Base recruitment in forecast option. See the Example New Forecast Format Input below.
Factor Method Start Year End Year
1 1 2002 2003 # natural mortality
4 1 2016 2018 # recruitment distribution
10 1 -999 0 # selectivity
11 1 -3 0 # relative \(F\)
12 1 2006 2014 # recruitment
-9999 -1 -1 -1
Factor Factors implemented thus far. Terminate with -9999.
1 = natural mortality (M);
4 = recruitment distribution;
5 = migration;
10 = selectivity;
11 = relative \(F\)
12 = recruitment
Method
0 (or omitted) = continue using time-varying parameters;
1 = use means of the derived factor; and
2 (future) = mean of the parameters, then apply as if time-varying.
Start Year Enter the actual year, -999 for the start year, or 0 or a negative integer for a year relative to the end year.
End Year Enter the actual year, or 0 or a negative integer for a year relative to the end year.
1 Control Rule Method: Used to apply reductions (“buffer”) to either the catch or \(F\) based on the control rule during the forecast period. The buffer value is specified below via the Control Rule Buffer.
0 = none (additional control rule inputs will be ignored);
1 = catch as a function of SSB, buffer on \(F\);
2 = \(F\) as a function of SSB, buffer on \(F\);
3 = catch as a function of SSB, buffer on catch (U.S. West Coast groundfish approach); and
4 = \(F\) as a function of SSB, buffer on catch.
0.40 Control Rule Inflection Relative biomass level to unfished spawning biomass above which \(F\) is constant at control rule \(F\). If set to -1 the ratio of \(B_{MSY}\) to the unfished spawning biomass will automatically be used.
0.10 Control Rule Cutoff Relative biomass level to unfished spawning biomass below which \(F\) is set to 0 (management threshold).
0.75 Control Rule Buffer (multiplier between 0-1.0 or -1) Control rule catch or \(F_{TARGET}\) as a fraction of the selected catch or \(F_{MSY}\) proxy. The buffer will be applied to reduce catch from the estimated OFL. The buffer is a value between 0 and 1.0, where a value of 1.0 would set catch equal to the OFL. For example, if the buffer is applied to catch (Control Rule option 3 or 4 above), the catch will equal the buffer times the OFL. Alternatively, a value of -1 allows the user to input forecast-year-specific control rule fractions (added in v.3.30.13).
Year and control rule buffer value. A value can be entered for each year or for a starting sequence of years; the final control rule buffer value will be applied to all subsequent forecast years.
2019 0.8
2020 0.6
2021 0.5
-9999 0
3 Number of forecast loops SS3 sequentially goes through the forecast up to three times.
1 = OFL only;
2 = ABC control rule and buffers; and
3 = set catches equal to the control rule or input catch and redo the forecast applying implementation error.
3 First forecast loop with stochastic recruitment If this is set to 1 or 2, then OFL and ABC will be calculated as if there were perfect knowledge about future recruitment deviations. If running a long forecast (e.g., 10-100 years), it is recommended to run without recruitment deviations, because long forecasts in which recruitment deviations are not turned on until loop 3 may have poor results (e.g., a crashed stock), especially if below-mean forecast recruitment is assumed (via the Base recruitment in forecast option, next input line).
Base recruitment in forecast: This option controls the base recruitment, to which deviations are applied, in the forecast. For options 1 and 2, the next value read is a scalar applied to the base. Option 4 instead takes the mean of a range of historical recruitments after all adjustments and deviations have been applied; it requires that the user set the forecast recruitment deviation phase to negative (specifically -1 to get a constant mean in MCMC) and that the last year of recruitment deviations is the end year.
0 = spawner recruit curve;
1 = value*(spawner recruit curve);
2 = value*(virgin recruitment);
3 = deprecated; and
4 = mean recruitment from Forecast Year range above, recruitment distribution not affected.
0.7 Scalar/multiplier applied to base The scalar is ignored unless option 1 or 2 is selected.
0 Not used
2015 First year for caps and allocations Should be after years with fixed inputs.
0 Implementation Error The standard deviation of the natural log of the ratio between the realized catch and the target catch in the forecast. (Set a value > 0.0 to make implementation error an estimated parameter that will add variance to the forecast.)
0 Do West Coast Groundfish Rebuilder Output: Creates a rebuild.dat file to be used for U.S. West Coast groundfish rebuilder program.
0 = omit U.S. West Coast rebuilder output; and
1 = do the abbreviated U.S. West Coast rebuilder output
Rebuilder catch (Year Declared): Input line is required even if Rebuilder = 0, specified in the line above.
> 0 = year first catch should be set to zero; and
-1 = set to 1999.
2004 Rebuilder start year (Year Initial): Input line is required even if Rebuilder = 0, specified two lines above.
> 0 = year for current age structure; and
-1 = set to end year +1.
1 Fleet Relative \(F\):
1 = use first-last allocation year; and
2 = read season(row) \(\times\) fleet (column) set below.
2 Basis for maximum forecast catch: The maximum basis for forecasted catch will be implemented for the “First year for caps and allocations” selected above. The maximum catch (biomass or numbers) by fleet is specified below on the “Maximum total forecast catch by fleet” line.
2 = total catch biomass;
3 = retained catch biomass;
5 = total catch numbers; and
6 = retained total numbers.
COND 2: Conditional input for fleet relative \(F\) (Enter: Season, Fleet, Relative \(F\))
1 1 0.6 Fleet allocation by relative \(F\) fraction. The fraction of the forecast \(F\) value. For a multi-area model, the user must define a fraction for each fleet and each area. The fractions must sum to one over all fleets and areas.
1 2 0.4
-9999 0 0 Terminator line
1 50 Maximum total forecast catch by fleet (in the units specified above: total catch biomass/numbers or retained catch biomass/numbers). Enter the fleet number and its maximum value. The last line of the entry must have fleet number = -9999.
-9999 -1
-9999 -1 Maximum total catch by area. Enter the area number and its maximum. The last line of the entry must have area number = -9999.
-1 = no maximum
1 1 Fleet assignment to allocation group Enter a list of fleet numbers and the allocation group number for each fleet that is in a group. The last line of the entry must have fleet number = -9999.
-9999 -1
COND: if N allocation groups is > 0 Enter a year and the allocation fraction to each group for that year. SS3 will fill those values to the end of the forecast, then read another year from this list. Terminate with -9999 in year field. Annual values are rescaled to sum to 1.0.
2002 1 Allocation to each group for each year of the forecast
-9999 1
-1 Basis for forecast catch: The dead or retained value in the forecast catch inputs will be interpreted in terms of numbers or biomass based on the units of the input catch for each fleet.
-1 = Read basis with each observation, allows for a mixture of dead, retained, or \(F\) basis by different fleets for the fixed catches below;
2 = Dead catch (retained + discarded);
3 = Retained catch; and
99 = Input \(\text{full\_}F\) (the \(text{full\_}F\) value for the model years can be found in the EXPLOITATION section in the Report file).

Forecast catch input:

Example forecast catch input with basis
COND: Basis = -1 Forecasted catches - enter one line per fixed forecast year catch (year-specific \(F\) or catch, including bycatch)
2012 1 1 1200 2 Year & Season & Fleet & Catch or \(F\) value & Basis
2013 1 1 1400 3 Year & Season & Fleet & Catch or \(F\) value & Basis
-9999 0 0 0 0 Indicates end of inputted catches to read
Example forecast catch input without basis
COND: Basis > 0 Forecasted catches - enter one line per fixed forecast year catch (year-specific \(F\) or catch, including bycatch)
2012 1 1 1200 Year & Season & Fleet & Catch or \(F\) value
2013 1 1 1200 Year & Season & Fleet & Catch or \(F\) value
-9999 0 0 0 Indicates end of inputted catches to read
999 End of Input
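
For working with this file programmatically, the r4ss package provides reader and writer functions. Below is a minimal sketch; the directory name "my_model" is hypothetical, and element names such as SPRtarget follow current r4ss conventions and may differ among versions.

    # Read, edit, and re-write forecast.ss with r4ss (illustrative)
    library(r4ss)
    fore <- SS_readforecast(file.path("my_model", "forecast.ss"),
                            verbose = FALSE)
    fore$SPRtarget          # the SPR_TARGET value described above
    fore$SPRtarget <- 0.45  # change the target
    SS_writeforecast(fore, dir = "my_model", overwrite = TRUE,
                     verbose = FALSE)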

6.2 Including a New Fleet in the Forecast

As of v.3.30.16, users can include a forecast fleet without catches during the modeled period. Previously, fleets in the forecast were required to have input catches at some point during the modeled period. SS3 now allows a fleet with no input catches during the modeled period to be used as a fleet in the forecast.

6.3 Benchmark Calculations

This feature of SS3 is designed to calculate an equilibrium fishing rate intended to serve as a proxy for the fishing rate that would provide maximum sustainable yield (\(F_{MSY}\)). Then in the forecast module these fishing rates can be used in the projections.

Four reference points can be calculated by SS3. The first is the estimate of \(F_{MSY}\) within the model, while the other reference points use proxies or an alternative estimated point.

6.3.0.1 Estimation


Each of the potential reference points is calculated by searching across a range of \(F\) multiplier levels, calculating equilibrium biomass and catch at each \(F\), using the Newton-Raphson method to calculate a better \(F\) multiplier value, and iterating a fixed number of times to achieve convergence on the desired level.
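
SS3’s internal search is part of its equilibrium code, but the idea can be sketched on a toy per-recruit model. Everything below (biology, selectivity, Beverton-Holt steepness, starting value) is made up for illustration; the Newton-Raphson step uses numerical derivatives of equilibrium yield with respect to the \(F\) multiplier.

    # Illustrative only: Newton-Raphson search for an F multiplier maximizing
    # equilibrium yield on a toy 5-age per-recruit model (not SS3 internals)
    M <- 0.2
    w <- c(0.1, 0.4, 0.9, 1.4, 1.8)   # weight at ages 0-4
    s <- c(0.0, 0.2, 0.7, 1.0, 1.0)   # selectivity at age
    h <- 0.7; R0 <- 1000
    phi0 <- sum(c(1, cumprod(exp(-rep(M, 4)))) * w)  # unfished SSB/R (proxy)

    eq_yield <- function(Fmult) {
      Z <- M + Fmult * s
      N <- c(1, cumprod(exp(-Z[-5])))                # equilibrium numbers/recruit
      sbpr <- sum(N * w)                             # spawning output per recruit
      R <- max(0, (4*h*R0*sbpr - (1-h)*R0*phi0) /
                  ((5*h - 1) * sbpr))                # Beverton-Holt equilibrium R
      R * sum(N * w * (Fmult * s / Z) * (1 - exp(-Z)))  # Baranov catch equation
    }

    Fmult <- 0.1; d <- 1e-4
    for (i in 1:25) {                 # Newton steps on dYield/dF = 0
      g1 <- (eq_yield(Fmult + d) - eq_yield(Fmult - d)) / (2 * d)
      g2 <- (eq_yield(Fmult + d) - 2*eq_yield(Fmult) + eq_yield(Fmult - d)) / d^2
      Fmult <- min(3, max(0.001, Fmult - g1 / g2))
    }
    Fmult   # approximate F_MSY multiplier for this toy model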

6.3.0.2 Calculations


The calculation of equilibrium biomass and catch uses the same code that is used to calculate the virgin conditions and the initial equilibrium conditions. This equilibrium calculation code takes into account all morph, timing, biology, selectivity, and movement conditions as they apply in the time series calculations. You can verify this by running SS3 to calculate \(F_{MSY}\), then hard-wiring the initial \(F\) to equal this value, using F_method approach 2 so each annual \(F\) equals \(F_{MSY}\), and setting the forecast \(F\) to the same \(F_{MSY}\). Then run SS3 without estimation and no recruitment deviations. You should see that the population has an initial equilibrium abundance equal to \(B_{MSY}\) and stays at this level during the time series and forecast.

6.3.0.3 Catch Units


For each fleet, SS3 always calculates catch in terms of biomass (mt) and numbers (1000s) for encountered (selected) catch, dead catch, and retained catch. These three categories differ only when some fleets have discarding or are designated as a bycatch fleet. SS3 uses total dead catch biomass as the quantity that is principally reported and the quantity that is optimized when searching for \(F_{MSY}\). The quantity “dead catch” may occasionally be referred to as “yield”.

6.3.0.4 Biomass Units


The principal measure of fish abundance, for the purpose of reference point calculation, is female reproductive output. This is referred to as SSB and sometimes just “B” because the typical user settings have one unit of reproductive output (fecundity) per kg of mature female biomass. So when the output label says \(B_{MSY}\), this is actually the female reproductive output at the proxy for \(F_{MSY}\).

6.3.0.5 Fleet Allocation


An important concept for the reference point calculation is the allocation of fishing rate among fleets. Internally, this is the benchmark-years relative \(F\), \(\text{Bmark\_rel}F(f,s)\), which is the fraction of the \(F\) multiplier assigned to each fleet \(f\) and season \(s\). The value \(F\text{mult} * \text{Bmark\_rel}F(f,s)\) is the \(F\) level for a particular fleet in a particular season for the age that has a selectivity of 1.0. Other ages will have different \(F\) values according to their selectivity.

6.3.0.6 Virgin vs. Unfished Spawning Biomass


The concept of unfished spawning biomass, (written as SSB_unfished in SS3 input and output files), is important to the reference points calculations. Unfished spawning biomass can be potentially different from virgin spawning biomass (written as SSB_virgin in SS3 output files).

6.4 Forecast Recruitment Adjustment

Recruitment during the forecast years sometimes needs to be set at a level other than that determined by the spawner-recruitment curve. One way to do this is through an environmental or block effect on the regime shift parameter. A more straightforward approach is provided by the special forecast recruitment feature described here, which offers four options; these are the options listed under “Base recruitment in forecast” in the forecast file options above.

This feature affects the expected recruitment in all years after the last year of the main recruitment deviations. This means that if the last year of the main recruitment deviations is before the end year, then the last few recruitments, termed “late”, are also affected by this forecast option. For example, option 4 (mean recruitment) would allow you to set the last 2 years of the time series and all forecast years to have recruitment equal to the mean recruitment for the last 10 years of the main recruitment era.

7 Data File

7.1 Overview of Data File

  1. Dimensions (years, ages, number of fleets, number of surveys, etc.)

  2. Fleet and survey names, timing, etc.

  3. Catch data (biomass or numbers)

  4. Discard totals or rate data

  5. Mean body weight or mean body length data

  6. Length composition set-up

  7. Length composition data

  8. Age composition set-up

  9. Age imprecision definitions

  10. Age composition data

  11. Mean length-at-age or mean bodyweight-at-age data

  12. Generalized size composition (e.g., weight frequency) data

  13. Environmental data

  14. Tag-recapture data

  15. Stock composition (e.g., morphs identified by otolith microchemistry) data

  16. Selectivity observations (new placeholder, not yet implemented)

7.2 Units of Measure

The normal units of measure are as follows:

7.3 Time Units

7.3.1 Seasons

Seasonal quantities in the model are calculated and treated in the following ways:

7.3.2 Subseasons and Timing of Events

The treatment of subseasons in SS3 provides more precision in the timing of events compared to earlier model versions. In earlier versions (v.3.24 and before), there were effectively only two subseasons per season because the ALK (age-length key) for each observation used the mid-season mean length-at-age and spawning occurred at the beginning of a specified season.

Time steps can be broken into subseasons, and the ALK can be calculated multiple times over the course of a year:

Subseason:   1      2      3      4      5      6
ALK:         ALK    ALK*   ALK*   ALK    ALK*   ALK
ALK* = only re-calculated when there is a survey in that subseason

7.4 Terminology

When the term COND appears in the “Typical Value” column of this documentation (it does not actually appear in the model files), it indicates that the following section is omitted except under certain conditions, or that the factors included in the following section depend upon certain conditions. In most cases, the description in the definition column is the same as the label output to the ss_new files.

7.5 Model Dimensions

Value Description
#V3.30.XX.XX Model version number. This is written by SS3 in the new files and a good idea to keep updated in the input files.
#C data using new survey Data file comment. Must start with #C to be retained then written to top of various output files. These comments can occur anywhere in the data file, but must have #C in columns 1-2.
1971 Start year
2001 End year
1 Number of seasons per year
12 Vector with the number of months in each season. These do not need to be integers. Note: If the sum of this vector is close to 12.0, then it is rescaled to sum to 1.0 so that season duration is a fraction of a year. If the sum is not close to 12.0, then each entered value is divided by 12, so with one season per year and 3 months in that season the calculated season duration will be 0.25, which allows a quarterly model to be run as if quarters were years (see the sketch after this table). All rates in SS3 (growth, mortality, etc.) are calculated by season using annual rates and the season duration.
2 The number of subseasons. The entry must be even and the minimum value is 2. This is for the purpose of finer temporal granularity in calculating growth and the associated ALK.
1.5 Spawning month; spawning biomass is calculated at this time of year (1.5 means January 15) and used as the basis for the total recruitment of all settlement events resulting from this spawning.
2 Number of sexes:
1 = current one sex, ignore fraction female input in the control file;
2 = current two sex, use fraction female in the control file; and
-1 = one sex and multiply the spawning biomass by the fraction female in the control file.
20 Number of ages. The value here will be the plus-group age. SS3 starts at age 0.
1 Number of areas
2 Total number of fishing and survey fleets (which now can be in any order).

7.6 Fleet Definitions

The catch data input has been modified to give the user more flexibility to add or remove fishing and survey fleets in a model set-up. The fleet setup input is transposed so that each fleet is now a row. Previous versions (v.3.24 and earlier) required that fishing fleets be listed first, followed by survey-only fleets. In SS3, all fleets have the same status within the model structure and each has a specified fleet type (except for models that use tag-recapture data; this will be corrected in future versions). Available types are: catch fleet, bycatch-only fleet, or survey.

Inputs that define the fishing and survey fleets:
2 Total number of fleets, including surveys, in any order
Fleet Type Timing Area Catch Units Catch Mult. Fleet Name
1 -1 1 1 0 FISHERY1
3 1 1 2 0 SURVEY1

7.6.0.1 Fleet Type


Define the fleet type (e.g., fishery fleet, survey fleet):
1 = fleet with input catches (fishing fleet);
2 = bycatch fleet, all catch discarded (see the Bycatch Fleets section);
3 = survey; and
4 = predator fleet (see the Predator Fleets section).

7.6.0.2 Timing


Timing for data observations:

7.6.0.3 Area


An integer value indicating the area in which a fleet operates.

7.6.0.4 Catch Units


Ignored for survey fleets; their units are read later:
1 = biomass; and
2 = numbers.

See Units of Measure for more information.

7.6.0.5 Catch Multiplier


Invokes use of a catch multiplier, which is then entered as a parameter in the mortality-growth parameter section. The estimated or fixed value of the catch multiplier is used to adjust the observed catch:
0 = no catch multiplier used; and
1 = apply a catch multiplier.

A catch multiplier can be useful when trying to explore historical unrecorded catches or ongoing illegal and unregulated catches. The catch multiplier is a full parameter line in the control file and can be time-varying.

7.7 Bycatch Fleets

The option to include bycatch fleets was introduced in v.3.30.10. This input is optional; if no bycatch is to be included in the catches, this section can be ignored.

A fishing fleet is designated as a bycatch fleet by setting its fleet type to 2. A bycatch fleet creates fishing mortality, the same as a fleet of type 1, but a bycatch fleet has all catch discarded, so the input value for retained catch is ignored. However, an input value for retained catch is still needed to indicate that the bycatch fleet was active in that year and season. A catch multiplier cannot be used with bycatch fleets because the catch multiplier works on retained catch. SS3 will expect the retention function for this fleet to be set to type 3 in the selectivity section, indicating that all selected catch is discarded dead. It is necessary to specify a selectivity pattern for the bycatch fleet and, due to a general lack of data, to externally derive values for the parameters of this selectivity.

All catch from a bycatch fleet is discarded, so one option is to enter annual values for the amount (not proportion) that is discarded in each time step. However, it is uncommon to have such data for all years. An alternative approach, used principally in the U.S. Gulf of Mexico, is to input a time series of effort data for this fleet in the survey section (effort is in effect a “survey” of \(F\); for example, the shrimp trawl fleet in the Gulf of Mexico catches and discards small finfish, and an effort time series is available for this fleet) and to input in the discard data section an observation for the average discard over time using the super-period approach. Another use of a bycatch fleet is to estimate the effect of an external source of mortality, such as a red tide event. In this usage there may be no data on the magnitude of the discards, and SS3 will then rely solely on the contrast in other data to attempt to estimate the magnitude of the red tide kill that occurred. The benefit of doing this as a bycatch fleet, rather than as a block on natural mortality, is that the selectivity of the effect can be specified.

Bycatch fleets are not expected to be under the same type of fishery management controls as the retained catch fleets included in the model. This means that when SS3 enters into the reference point equilibrium calculations, it would be incorrect to have SS3 re-scale the magnitude of the \(F\) for the bycatch fleet as it searches for the \(F\) that produces, for example, F35%. Related issues apply to the forecast. Consequently, a separate set of controls is provided for bycatch fleets (defined below). Input is required for each fleet designated as fleet type = 2.

If a fleet above was set as a bycatch fleet (fleet type = 2), the following line is required:

Bycatch fleet input controls:
a: Fleet Index   b: Include in msy   c: \(F\text{mult}\)   d: \(F\) or First Year   e: \(F\) or Last Year   f: Not Used
2                2                   3                     1982                     2010                    0

The above example set-up defines one fleet (fleet number 2) as a bycatch fleet, with the dead catch from this fleet not included in the search for msy (b: Include in msy = 2). The level of \(F\) from the bycatch fleet in reference point calculations and forecasts is set to the mean (c: \(F\text{mult}\) = 3) of the estimated \(F\) for the range of years 1982-2010.

7.7.0.1 Fleet Index


Fleet number for which to include bycatch catch. Fleet number is assigned within the model based on the order of listed fleets in the Fleet Definition section. If there are multiple bycatch fleets, then a line for each fleet is required in the bycatch section.

7.7.0.2 Include in msy


The options are:
1 = dead catch from this bycatch fleet is included in the search for msy and other reference points; and
2 = dead catch from this bycatch fleet is excluded from the search, as in the example above.

7.7.0.3 \(F\) Multiplier (\(F\text{mult}\))


The options are:
1 = the bycatch \(F\) is re-scaled along with the \(F\) of the other fleets;
2 = the bycatch \(F\) is held constant at the value entered in column d; and
3 = the bycatch \(F\) is set to the mean of the estimated \(F\) over the years entered in columns d and e, as in the example above.

7.7.0.4 \(F\) or First Year


The specified \(F\) value, or the first year of the year range, depending on the \(F\text{mult}\) option selected for the bycatch fleet.

7.7.0.5 \(F\) or Last Year


The specified \(F\) value, or the last year of the year range, depending on the \(F\text{mult}\) option selected for the bycatch fleet.

7.7.0.6 Not Used


This column is not yet used and is reserved for future features.

7.7.0.7 Bycatch Fleet Usage Instructions and Warnings


When implementing a bycatch fleet, changes to both the data and control file are needed.

The needed changes to the data file are:

  1. Fleet type - set to value of 2.

  2. Set bycatch fleet controls per information above.

  3. Catch input - you must enter a positive value for catch in each year/season for which you want a bycatch calculated. The entered value of catch will be ignored by SS3; it is just a placeholder to invoke creating an \(F\).

    1. Initial equilibrium - you may want to enter the bycatch amount as retained catch for the initial equilibrium year because there is no option to enter initial equilibrium discard in the discard section.

  4. Discard input - It is recommended to enter the amount of discard to assist SS3 in estimating the \(F\) for the bycatch fleet.

  5. Survey input - It is useful, but not absolutely necessary, to enter the effort time series by the bycatch fleet to assist SS3 in estimating the annual changes in \(F\) for the bycatch fleet.

The needed changes to the control file are:

  1. The \(F\) method must be set to 2 in order for SS3 to estimate \(F\) without having information on retained catch.

  2. Selectivity -

    1. A selectivity pattern must be specified and fixed (or estimated if composition data are provided).

    2. The discard column of selectivity input must be set to a value of 3 to cause all catch to be discarded.

In v.3.30.14 it was identified that there can be an interaction between the use of bycatch fleets and the search for the \(F_{0.1}\) reference point, which may result in the search failing. Changes to the search feature were implemented to make it more robust; however, issues may still be encountered. In these instances it is recommended not to select the \(F_{0.1}\) reference point calculation in the forecast file.

7.8 Predator Fleets

Introduced in v.3.30.18, a predator fleet provides the capability to define an entity as a predator that adds additional mortality (\(M2\), i.e., the predation mortality) to the base natural mortality. This new capability means that previous use of bycatch fleets to mimic predators (or fish kills, e.g., due to red tide) will no longer be necessary. The problem with using a bycatch fleet as a predator was that it still created an \(F\) that was included in the reporting of total \(F\) even if the bycatch was not included in the msy search.

For each fleet that is designated as a predator, a new parameter line is created in the mg parameter section of the control file. This parameter will have the label M2_pred1, where the “1” is the index of the predator (not the index of the fleet being used as a predator). More than one predator can be included. If the model has more than one season, it is normal to expect \(M2\) to vary seasonally. Therefore, only if the number of seasons is greater than 1, follow each \(M2\) parameter with one parameter per season to provide the seasonal multipliers. These are simple multipliers on \(M2\), so at least one of them needs to have a non-estimated value. The set of multipliers can be used to make \(M2\) operate in only one season if desired. If there is more than one predator fleet, each will have its own seasonal multipliers. If there is only 1 season in the model, then no multiplier lines are included.

Three types of data relevant to \(M2\) can be input:

  1. a time series of the total kill (e.g., consumption amounts), entered in the discard data section;

  2. a time series of predator effort, entered in the survey data section; and

  3. environmental data used to drive the \(M2\) time series.

With the input of data on the time series of total kill or predator effort, it should be possible to estimate annual deviations around the base \(M2\) for years with data. If the \(M2\) time series is instead driven by environmental data, then also including data on kill or effort can provide a means to check consistency between the environmental time series and the additional data sets. Output of \(M2\) is found in a Report.sso section labeled predator (\(M2\)). For example, in one application the \(M2\) seasonal multiplier was defined to have random deviations by year, which allowed the multipliers plus \(M2\) itself to closely match the input consumption amounts (288 mt of consumption per season; the fit can be examined by looking at the discard output report).

7.9 Catch

After reading the fleet-specific indicators, a list of catch values by fleet and season is read by the model. Each record gives the year and season to which the catch is attributed, the fleet, the catch value, and a year-specific standard error for the catch. Only positive catches need to be entered, so there is no need for records corresponding to all years and fleets. To include an equilibrium catch value for a fleet and season, the year should be entered as -999. For each non-zero equilibrium catch value included, a short parameter line is required in the initial \(F\) section of the control file.

There is no longer a need to specify the number of records to be read; instead the list is terminated by entering a record with the value of -9999 in the year field. The updated list based approach extends throughout the data file (e.g., catch, length- and age-composition data), the control file (e.g., lambdas), and the forecast file (e.g., total catch by fleet, total catch by area, allocation groups, forecasted catch).

In addition, it is possible to collapse the number of seasons. If a season value is greater than the number of seasons for a particular model, that catch is added to the catch for the final season. This is one way to easily collapse a seasonal model into an annual model. The alternative is to use season = 0, which causes SS3 to distribute the input value of catch equally among the seasons. SS3 assumes that catch occurs continuously over a season, hence catch is not specified by month in the catch data section. However, all other data types will need to be specified by month.

The format for a 2 season model with 2 fisheries looks like the table below. The example is sorted by fleet, but the sort order does not matter. In data.ss_new, the sort order is fleet, year, season.

Catches by year, season for every fleet:
Year Season Fleet Catch Catch se
-999 1 1 56 0.05
-999 2 1 62 0.05
1975 1 1 876 0.05
1975 2 1 343 0.05
... ... ... ... ...
... ... ... ... ...
-999 1 2 55 0.05
-999 2 2 22 0.05
1975 1 2 555 0.05
1975 2 2 873 0.05
... ... ... ... ...
... ... ... ... ...
-9999 0 0 0 0
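
As an illustration only, the fleet 1 records above (including the -999 equilibrium lines and the -9999 terminator) could be assembled in R before being written into the data file. This is a minimal sketch that mirrors the table layout; it is not a required tool or format:

catch <- data.frame(
  year   = c(-999, -999, 1975, 1975),   # -999 = initial equilibrium catch
  season = c(1, 2, 1, 2),
  fleet  = 1,
  catch  = c(56, 62, 876, 343),
  se     = 0.05
)
catch <- rbind(catch, c(-9999, 0, 0, 0, 0))  # terminator record ends the list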

7.10 Surveys and Indices

Indices are data that are compared to aggregate quantities in the model. Typically, the index is a measure of selected fish abundance, but this data section also allows for the index to be related to a fishing fleet’s \(F\), or to another quantity estimated by the model. The first section of the “Indices” setup contains the fleet number, units, error distribution, and whether additional output (sd Report) will be written to the Report file for each fleet that has index data.

cpue and Survey Abundance Observations:
Fleet/ Error
Survey Units Distribution sd Report
1 1 0 0
2 1 0 0
... ... ... ...

7.10.0.1 Units


The options for units for input data are:
0 = numbers;
1 = biomass; and
2 = \(F\); values of 30 and greater select special survey types that relate the index to other model quantities.

7.10.0.2 Error Distribution


The options for error distribution form are:
-1 = normal error;
0 = log-normal error; and
>0 = Student’s t-distribution in natural log space, with degrees of freedom equal to the entered value.

Abundance indices are typically assumed to have a log-normal error structure, with units of the se of \(ln_{e}\)(index). If the variance of the observations is available only as a cv (the se of the observation divided by the mean value of the observation in natural space), then the se in natural log space can be calculated as \(\sqrt{ln_e(1+CV^2)}\).
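
As a quick check, this conversion is one line of R (a sketch, not part of SS3):

se_log <- function(cv) sqrt(log(1 + cv^2))  # se in natural log space from a cv
se_log(0.5)  # about 0.472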

For the normal error structure, the entered values for se are interpreted directly as a se in arithmetic space and not as a cv. Thus switching from a log-normal to a normal error structure forces the user to provide different values for the se input in the data file.

If the data exist as a set of normalized Z-scores, you can assert a log-normal error structure after entering the data as \(exp(Z-score)\) because it will be logged by SS3. Preferably, the Z-scores would be entered directly and the normal error structure would be used.

7.10.0.3 Enable sd Report


Indices with sd Report enabled will have the expected values for their historical observations appear in the ss.std and ss.cor files. The default value for this option is 0.

7.10.0.4 Data Format


Year Month Fleet/Survey Observation se
1991 7 3 80000 0.056
1995 7.2 3 65000 0.056
... ... ... ... ...
2000 7.1 3 42000 0.056
-9999 0 0 0 0

7.11 Discard

If discard is not a feature of the model specification, then just a single input is needed:

0 Number of fleets with discard observations

If discard is being used, the input syntax is:

1 Number of fleets with discard observations
Fleet Units Error Distribution
1 2 -1
Year Month Fleet Observation se
1980 7 1 0.05 0.25
1991 7 1 0.10 0.25
-9999 0 0 0 0

Note that although the user must specify a month for the observed discard data, the unit for discard data is in terms of a season rather than a specific month. So, if using a seasonal model, the input month values must correspond to some time during the correct season. The actual value will not matter because the discard amount is calculated for the entirety of the season. However, discard length or age observations will be treated according to the entered observation month.

7.11.0.1 Discard Units


The options are:
1 = values are amounts of discard, in the same units (biomass or numbers) as the catch for this fleet;
2 = values are fractions (proportions) of the total catch that is discarded; and
3 = values are in numbers of fish discarded, even if the catch is defined in terms of biomass.

7.11.0.2 Discard Error Distribution


The four options for discard error are:
>0 = degrees of freedom for a Student’s t-distribution, with the entered error interpreted as the cv of the observation;
0 = normal distribution, with the entered error interpreted as the cv of the observation;
-1 = normal distribution, with the entered error interpreted as the se of the observation; and
-2 = log-normal distribution, with the entered error interpreted as the se of the observation in natural log space.

7.11.0.3 Discard Notes


7.11.0.4 Cautionary Note


The use of the cv as the measure of variance can cause a small discard value to appear overly precise, even with the minimum se of the discard observation set to 0.001. In the control file, there is an option to add an extra amount of variance. This amount is added to the se, not to the cv, to help correct this problem of underestimated variance.

7.12 Mean Body Weight or Length

This is the overall mean body weight or length across all selected sizes and ages. This may be useful in situations where individual fish are not measured but mean weight is obtained by counting the number of fish in a specified sample (e.g., a 25 kg basket).

Mean Body Weight Data Section:
1 Use mean body size data (0/1)
COND > 0:
30 Degrees of freedom for Student’s t-distribution used to evaluate mean body
weight deviation.
Year Month Fleet Partition Type Observation cv
1990 7 1 0 1 4.0 0.95
1990 7 1 0 1 1.0 0.95
-9999 0 0 0 0 0 0

7.12.0.1 Partition


Mean weight data and composition data require specification of the group from which the sample originated (0 = combined, 1 = discard, 2 = retained). Note: if retention is not defined in the selectivity section, observations with Partition = 2 will be changed to Partition = 0.

7.12.0.2 Type


Specify the type of data:

7.12.0.3 Observation - Units


Units must correspond to the units of body weight, normally in kg (or mean length in cm). The expected value of mean body weight (or mean length) is calculated in a way that incorporates the effects of selectivity and retention.

7.12.0.4 Error


Error is entered as the cv of the observed mean body weight (or mean length).

7.13 Population Length Bins

The first part of the length composition section sets up the bin structure for the population. These bins define the granularity of the alk and the coarseness of the length selectivity. Fine bins create smoother distributions, but a larger and slower-running model. First read a single value to select one of three population length bin methods, then any conditional input for options 2 and 3:

1 Use data bins to be read later. No additional input here.
2 Generate from bin width, minimum, and maximum; read the next three values:
2 Bin width
10 Lower size of first bin
82 Lower size of largest bin
3 Read one value for the number of bins, and then read the vector of bin boundaries:
37 Number of population length bins to be read
10 12 14 ... 82 Vector containing the lower edge of each population size bin
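
As a check of the arithmetic, the method 2 inputs above imply the same bin vector that method 3 reads explicitly; in R:

bins <- seq(from = 10, to = 82, by = 2)  # lower edges: 10, 12, ..., 82
length(bins)                             # 37 bins, matching the method 3 example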

7.13.0.1 Notes


There are some items for users to consider when setting up population length bins:

7.14 Length Composition Data Structure

Enter a code to indicate whether length composition data will be used:
1 Use length composition data (0/1/2)

If the value 0 is entered, the model skips all length-related inputs below and proceeds to the age data setup section. If the value 1 is entered, all data weighting options for composition data apply equally to all partitions within a fleet. If the value 2 is entered, then the data weighting options are applied by the specified partition. Note that the partitions must be entered in numerical order within each fleet.

If the value for fleet is negative, then the vector of inputs is copied to all partitions (0 = combined, 1 = discard, and 2 = retained) for that fleet and all higher numbered fleets. This is a good practice so that the user controls the values used for all fleets.

Example table of length composition settings when “Use length composition data” = 1 (where the first fleet has a multinomial error structure with no associated parameter, and the second fleet uses the Dirichlet-multinomial structure):

Min. Tail Compress. | Const. Added to Prop. | Combine Males & Females | Compress Bins | Comp. Error Dist. | Param. Select | Min. Sample Size
0 0.0001 0 0 0 0 0.1
0 0.0001 0 0 1 1 0.1

Example table of length composition settings when “Use length composition data” = 2 (where the -1 in the fleet column applies the first parameter to all partitions for fleet 1, while fleet 2 has separate parameters for discards and retained fish):

Fleet | Partition | Min. Tail Compress. | Const. Added to Prop. | Combine Males & Females | Compress Bins | Comp. Error Dist. | Param. Select | Min. Sample Size
-1 0 0 0.0001 0 0 1 1 0.1
2 1 0 0.0001 0 0 1 2 0.1
2 2 0 0.0001 0 0 1 3 0.1
...
-9999 0 0 0 0 0 0 0 0

7.14.0.1 Minimum Tail Compression


Compress the tails of the composition until the observed proportion is greater than this value; a negative value causes no compression. No compression is advised if data are very sparse, especially if the set-up uses age composition within length bins, because of the sparseness of these data. If tail compression is on, a single fish observed in an extreme tail bin will cause the entire vector to be collapsed to that bin.

7.14.0.2 Added Constant to Proportions


Constant added to observed and expected proportions at length and age to make log likelihood calculations more robust. Tail compression occurs before adding this constant. Proportions are re-normalized to sum to 1.0 after constant is added.

The constant should be greater than 0. Commonly used values range from 0.00001 to 0.01. Larger values will cause differences among bins with smaller values to be less influential, leading to greater relative influence of the bins with the largest proportions of the compositions.
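
A minimal R sketch of this robustification, assuming p is a vector of proportions that has already been tail-compressed:

robustify <- function(p, const = 0.0001) {
  p <- p + const  # add the constant to every bin
  p / sum(p)      # re-normalize so the proportions sum to 1.0
}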

7.14.0.3 Combine Males & Females


Combine males into females at or below this bin number. This is useful if the sex determination of very small fish is doubtful, allowing the small fish to be treated as combined sex. If Combine Males & Females > 0, then males are added into females for bins 1 through this number, the male values are zeroed out, and the male data start at the first bin above this bin. Note that this value is entered as a bin index, not as the size associated with that bin. A comparable option is available for age composition data.
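
A sketch of this bookkeeping in R, using hypothetical female and male count vectors f and m:

combine_sexes <- function(f, m, bin) {
  f[1:bin] <- f[1:bin] + m[1:bin]  # add males into females for bins 1 through bin
  m[1:bin] <- 0                    # zero out the male entries for those bins
  list(female = f, male = m)
}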

7.14.0.4 Compress Bins


This option allows for the compression of length or age bins beyond a specific length or age by each data source. As an example, a value of 5 in the compress bins column would condense the final five length bins for the specified data source.
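
A sketch of the intended aggregation in R, where x is a composition vector and the final n_combine bins are condensed into one:

compress_bins <- function(x, n_combine = 5) {
  n <- length(x)
  c(x[seq_len(n - n_combine)], sum(x[(n - n_combine + 1):n]))
}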

7.14.0.5 Composition Error Distribution


The options are:
0 = multinomial error distribution; and
1 = Dirichlet-multinomial error distribution.

7.14.0.6 Parameter Select


Value that indicates the group of composition data for estimation of the Dirichlet-multinomial parameter used for weighting these composition data.

7.14.0.7 Minimum Sample Size


The minimum value (floor) for all sample sizes. This value must be at least 0.001. Conditional age-at-length data may have observations with sample sizes less than 1. Version 3.24 had an implicit minimum sample size value of 1.

7.14.0.8 Additional information on Dirichlet Parameter Number and Effective Sample Sizes


If the Dirichlet-multinomial error distribution is selected, indicate here which of a list of Dirichlet-multinomial parameters will be used for this fleet. Each fleet could use a unique Dirichlet-multinomial parameter, all fleets could share the same one, or any combination of unique and shared parameters can be used. The requested number of Dirichlet-multinomial parameters is specified as parameter lines in the control file immediately after the selectivity parameter section. Please note that the age-composition Dirichlet-multinomial parameters are numbered after the length-composition parameters, so a model with one fleet and both data types would require two new Dirichlet-multinomial parameters.

The Dirichlet-multinomial estimates the effective sample size as \(N_{eff}=\frac{1}{1+\theta}+\frac{N\theta}{1+\theta}\), where \(\theta\) is the estimated parameter and \(N\) is the input sample size. Stock Synthesis estimates the natural log of the Dirichlet-multinomial parameter; for example, \(\hat{\theta}_{\text{fishery}} = e^{-0.6072} = 0.54\), which with an assumed \(N=100\) for the fishery results in an effective sample size of 35.7.

This formula for effective sample size implies that, as the Stock Synthesis parameter \(ln(DM\text{\_theta})\) goes to large values (e.g., 20), the adjusted sample size will converge to the input sample size. In this case, small changes in the value of the \(ln(DM\text{\_theta})\) parameter have no effect, and the derivative of the negative log likelihood with respect to the parameter is zero, which means the Hessian will be singular and cannot be inverted. To avoid a non-invertible Hessian when the \(ln(DM\text{\_theta})\) parameter becomes large, turn the parameter off while fixing it at the high value. This is equivalent to turning off down-weighting for fleets where the evidence suggests that the input sample sizes are reasonable.
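
Both behaviors can be verified with a short R sketch of the effective sample size formula:

dm_neff <- function(ln_theta, n) {
  theta <- exp(ln_theta)
  1 / (1 + theta) + n * theta / (1 + theta)
}
dm_neff(-0.6072, 100)  # about 35.9; evaluating with theta rounded to 0.54 gives the 35.7 quoted above
dm_neff(20, 100)       # essentially 100: a large ln(DM_theta) returns the input sample size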

For additional information about the Dirichlet-multinomial please see Thorson et al. (2017) and the detailed Data Weighting section.

7.15 Length Composition Data

Composition data can be entered as proportions, numbers, or values of observations by length bin based on data expansions.

The data bins do not need to cover all observed lengths. The selection of the data bin structure should be based on the observed distribution of lengths and the assumed growth curve. If growth asymptotes at larger lengths, additional length bins across these sizes may not contribute information to the model and may slow model run time. Additionally, the lowest length bins should be chosen, in light of the size selection, to retain information on smaller fish and possible patterns in recruitment. While the length and age bins are set separately, users should ensure that they align. It is recommended to explore multiple configurations of length and age bins to determine the impact of this choice on model estimation.

Specify the length composition data as:

28 Number of length bins for data
26 28 30 ... 80 Vector of length bins associated with the length data

Note: the vector of length bins above will aggregate data from outside the range of values as follows:

bin 1 bin 2 bin 3 ... bin 27 bin 28
bin vector 26 28 30 ... 78 80
bin contains 0–27.99 28–29.99 30–31.99 ... 78–79.99 80+
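
The accumulator behavior can be mimicked in R with cut(); this is a sketch of the binning rule only, not SS3 code:

lens   <- c(12.3, 27.9, 30.4, 85.0)          # hypothetical observed lengths (cm)
breaks <- c(-Inf, seq(28, 80, by = 2), Inf)  # first bin accumulates below 28, last bin is 80+
table(cut(lens, breaks, right = FALSE))      # 12.3 and 27.9 fall in bin 1; 85.0 in bin 28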

Example of a single length composition observation:

Year Month Fleet Sex Partition Nsamp data vector
1986 1 1 3 0 20 <female then male data>
... ... ... ... ... ... ...
-9999 0 0 0 0 0 <0 repeated for each element of the data vector above>

7.15.0.1 Sex


If the model has only one sex defined in the set-up, all observations must have sex set equal to 0 or 1, and the data vector for each year will have length equal to the number of user-defined data bins. This also applies to the age data.

In a 2 sex model, the data vector always has female data followed by male data, even if only one of the two sexes has data that will be used. The following description applies to a 2 sex model:
0 = combined sex observation, entered in the female portion of the vector;
1 = female only;
2 = male only; and
3 = both sexes, with female data followed by male data.

7.15.0.2 Partition


Partition indicates samples from either combined (0), discard (1), or retained (2) catch. Note: if retention is not defined in the selectivity section, observations with Partition = 2 will be changed to Partition = 0.

7.15.0.3 Excluding Data


7.15.0.4 Note


When processing data to be input into SS3, all observed fish of sizes smaller than the first bin should be added to the first bin and all observed fish larger than the last bin should be condensed into the last bin.

The number of length composition data lines no longer needs to be specified in order to read the length (or age) composition data. Starting in v.3.30, the model will continue to read length composition data until it encounters a pre-specified exit line: a record with -9999 in the year field at the end of the data matrix.

Each observation can be stored as one row for ease of data management in a spreadsheet and for sorting of the observations. However, the 6 header values, the female vector, and the male vector could each be on separate lines, because admb reads values consecutively from the input file and will move to the next line as necessary to read additional values.

The composition observations can be in any order, and replicate observations for a year and fleet are allowed (unlike survey and discard data). However, if the super-period approach is used, then each super-period’s observations must be contiguous in the data file.

7.16 Age Composition Option

The age composition section begins by reading the number of age bins. If the value 0 is entered for the number of age bins, then the model skips reading the bin structure and all other age composition data inputs.

17 Number of age bins; can be equal to 0 if age data are not used; do not include a vector of age bins if the number of age bins is set equal to 0.

7.16.1 Age Composition Bins

If a positive number of age bins is read, then the bin definition is read next.

1 2 3 ... 20 25 Vector of ages

The bins are in terms of observed age (here denoted age′) and are entered as the lower edge of each bin. Each ageing imprecision definition is used to create a matrix that translates the true age structure into an observed age′ structure. The first and last age′ bins work as accumulators, so in the example any age 0 fish that are caught would be assigned to the age′ = 1 bin.

7.16.2 Ageing Error

Here, the capability to create a distribution of observed age (age′, with possible bias and imprecision) from true age is defined. One or many ageing error definitions can be created. For each, the model expects an input vector of mean observed age and a vector of standard deviations associated with that mean age.

2 Number of ageing error matrices to generate
Example with no bias and very little uncertainty at age:
Age-0 Age-1 Age-2 ... Max Age
-1 -1 -1 ... -1 #Mean Age
0.001 0.001 0.001 ... 0.001 #SD
Example with no bias and some uncertainty at age:
0.5 1.5 2.5 ... Max Age + 0.5 #Mean Age
0.5 0.65 0.67 ... 4.3 #SD Age
Example with bias and uncertainty at age:
0.5 1.4 2.3 ... Max Age + Age Bias #Mean Age
0.5 0.65 0.67 ... 4.3 #SD Age

In principle, one could have year- or laboratory-specific matrices for ageing error. For each matrix, enter a vector with the mean observed age for each true age; if there is no ageing bias, then set the mean age equal to true age + 0.5. Alternatively, a value of -1 for mean age sets it equal to true age + 0.5. The addition of 0.5 is needed so that fish will be assigned to the intended integer age. The length of the input vector is equal to the population maximum age plus one (ages 0 through maximum age), with the first entry being for age 0 fish and the last for fish of the population maximum age, even if the maximum age bin for the data is lower than the population maximum age. The following line is a vector with the standard deviation of observed age for each true age, assuming a normal distribution.
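
For example, an unbiased ageing error definition for a population maximum age of 20 could be built in R as follows (a sketch of the two input vectors only):

max_age  <- 20
mean_age <- 0:max_age + 0.5          # unbiased: true age + 0.5; entering -1 for every entry is the built-in shortcut
sd_age   <- rep(0.001, max_age + 1)  # near-zero imprecision, as in the first example above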

The model is able to create one ageing error matrix from parameters, rather than from an input vector. The range of conditions in which this new feature will perform well has not been evaluated, so it should be considered as a preliminary implementation and subject to modification. To invoke this option, for the selected ageing error vector, set the standard deviation of ageing error to a negative value for age 0. This will cause creation of an ageing error matrix from parameters and any age or size-at-age data that specify use of this age error pattern will use this matrix. Then in the control file, add a full parameter line below the cohort growth deviation parameter (or the movement parameter lines if used) in the mortality growth parameter section. These parameters are described in the control file section of this manual.

Code for ageing error calculation can be found in SS_miscfxn.tpl, search for function “get_age_age” or “SS_Label_Function 45”.

7.16.3 Age Composition Specification

If age data are included in the model, the following set-up is required, similar to the length data section. See Length Composition Data Structure for details on each of these inputs.

Specify bin compression and error structure for age composition data for each fleet:
Min. Tail Compress. | Const. Added to Prop. | Combine Males & Females | Compress Bins | Comp. Error Dist. | Param. Select | Min. Sample Size
0 0.0001 1 0 0 0 1
0 0.0001 1 0 0 0 1
Specify the method by which the length bin range for age observations will be interpreted:
1 Bin method for age data:
1 = value refers to population bin index;
2 = value refers to data bin index; and
3 = value is actual length (which must correspond to a population length bin boundary).
An example age composition observation:
Year Month Fleet Sex Partition Age Err Lbin lo Lbin hi Nsamp Data Vector
1987 1 1 3 0 2 -1 -1 79 <enter data values>
-9999 0 0 0 0 0 0 0 0 0

Syntax for Sex, Partition, and the data vector is the same as for the length composition data: the data vector has female values followed by male values.

7.16.3.1 Age Error


Age error (Age Err) identifies which ageing error matrix to use to generate expected value for this observation.

7.16.3.2 Lbin Low and Lbin High


Lbin lo and Lbin hi are the range of length bins that this age composition observation refers to. Normally these are entered with a value of -1 and -1 to select the full size range. Whether these are entered as population bin number, length data bin number, or actual length is controlled by the value of the length bin range method above.

7.16.3.3 Excluding Data


As with the length composition data: a negative year value causes the observation not to be read into the working matrix; a negative value for fleet causes the observation to be included in the calculation of expected values, but not in the total log likelihood; and a negative value for month indicates the start or stop of a super-period.

7.17 Conditional Age-at-Length

Use of conditional age-at-length data will greatly increase the total number of age composition observations and the associated model run time, but there can be several advantages to inputting the ages in this fashion. First, it avoids double use of fish for both age and size information because the age information is considered conditional on the length information. Second, it contains more detailed information about the relationship between size and age, and so provides a stronger ability to estimate growth parameters, especially the variance of size-at-age. Lastly, where age data are collected in a length-stratified program, the conditional age-at-length approach can directly match the protocols of the sampling program.

However, simulation research has shown that the use of conditional age-at-length data can result in biased growth estimates in the presence of unaccounted for age-based movement when length-based selectivity is assumed (H. H. Lee et al. 2017), when other age-based processes (e.g., mortality) are not accounted for (H. Lee et al. 2019), or based on the age sampling protocol (Piner, Lee, and Maunder 2016). Understanding how data are collected (e.g., random, length-conditioned samples) and the biology of the stock is important when using conditional age-at-length data for a fleet.

In a two sex model, it is best to enter these conditional age-at-length data as single sex observations (sex = 1 for females and = 2 for males), rather than as joint sex observations (sex = 3). Inputting joint sex observations comes with a more rigid assumption about sex ratios within each length bin. Using separate vectors for each sex allows 100% of the expected composition to be fit to 100% observations within each sex, whereas with the sex = 3 option, you would have a bad fit if the sex ratio were out of balance with the model expectation, even if the observed proportion at age within each sex exactly matched the model expectation for that age. Additionally, inputting the conditional age-at-length data as single sex observations isolates the age composition data from any sex selectivity as well.

Conditional age-at-length data are entered within the age composition data section and can be mixed with marginal age observations for other fleets or other years within a fleet. To treat age data as conditional on length, Lbin_lo and Lbin_hi are used to select a subset of the total size range. This is different from setting Lbin_lo and Lbin_hi both to -1 to select the entire size range, which treats the data entered on this line as marginal age composition data.

An example conditional age-at-length composition observations:
Year Month Fleet Sex Partition Age Err Lbin lo Lbin hi Nsamp Data Vector
1987 1 1 1 0 2 10 10 18 <data values>
1987 1 1 1 0 2 12 12 24 <data values>
1987 1 1 1 0 2 14 14 16 <data values>
1987 1 1 1 0 2 16 16 30 <data values>
-9999 0 0 0 0 0 0 0 0 0

In this example observation, the age data are treated as being conditional on the 2 cm length bins of 10–11.99, 12–13.99, 14–15.99, and 16–17.99 cm. If there are no observations of ages for a specific sex within a length bin for a specific year, that entry may be omitted.
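
The rows above could be generated programmatically; this R sketch is illustrative only (the column names are for readability and are not read by SS3):

caal <- data.frame(
  year = 1987, month = 1, fleet = 1, sex = 1, partition = 0, ageerr = 2,
  Lbin_lo = seq(10, 16, by = 2)  # 2 cm bins: 10, 12, 14, 16
)
caal$Lbin_hi <- caal$Lbin_lo     # each row conditions on a single length bin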

7.18 Mean Length or Body Weight-at-Age

The model also accepts input of mean length-at-age or mean body weight-at-age. This is done in terms of observed age, not true age, to take into account the effects of ageing imprecision on expected mean size-at-age. If the value of the Age Error column is positive, then the observation is interpreted as mean length-at-age. If the value of the Age Error column is negative, then the observation is interpreted as mean body weight-at-age and the abs(Age Error) is used as Age Error.

1 Use mean size-at-age observation (0 = none, 1 = read data matrix)
An example observation:
Yr Month Fleet Sex Part. Age Err. Ignore <Data Vector: female then male> <Sample Size: female then male>
1989 7 1 3 0 1 999 <Mean Size values> <Sample Sizes>
...
-9999 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

7.18.0.1 Note


7.19 Environmental Data

The model accepts input of time series of environmental data. Parameters can be made time-varying by making them a function of one of these environmental time series. In v.3.30.16, an option was added to specify the centering of the environmental data, either as a z-score (subtract the mean and divide by the standard deviation) or by simply subtracting the mean.

Parameter values can be a function of an environmental data series:
1 Number of environmental variables
The environmental data can be centered by subtracting the mean and dividing by sd
(z-score, -1) or by subtracting the mean of the environmental variable (-2) based on
the year column value.
COND > 0 Example of 2 environmental observations:
Year Variable Value
1990 1 0.10
1991 1 0.15
-1 1 1
-2 2 1
-9999 0 0

The final two lines in the example above indicate that variable series 1 will be centered by subtracting the mean and dividing by the sd (indicated by the -1 value in the year column), while variable series 2 will be centered by subtracting the mean of the time series (indicated by the -2 value in the year column). The input in the “value” column for both of these final two lines is ignored by the model. The control file will also need to be modified, via the “Env var & Link” column in the long parameter line for the selected parameter, to link that parameter to the environmental series. This feature was added in v.3.30.16.
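
The two centering options correspond to the following R calculations (a sketch, with x standing in for an environmental series):

x <- c(0.10, 0.15)                 # the two observations from the example
z_score  <- (x - mean(x)) / sd(x)  # year = -1: subtract the mean and divide by the sd
centered <- x - mean(x)            # year = -2: subtract the mean only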

7.19.0.1 Note


7.20 Generalized Size Composition Data

The generalized approach to size composition information was designed initially to provide a means to include weight frequency data. However, the uses are broader, such as allowing for size composition data with different data bins. The user can define as many generalized size composition methods as necessary.

Example entry:
2 Number (N) of size frequency methods to be read. If this value is 0, then omit all entries below. A value of -1 (or any negative value) triggers the expanded optional inputs below, which allow use of the Dirichlet-multinomial error distribution for fitting these data.
COND < 0 (number of size frequency methods entered as negative):
2 Number of size frequency methods to read
END COND < 0
25 15 Number of bins per method
2 2 Units per each method (1 = biomass, 2 = numbers)
3 3 Scale per each method (1 = kg, 2 = lbs, 3 = cm, 4 = inches)
1e-9 1e-9 Min compression to add to each observation (entry for each method)
2 2 Number of observations per weight frequency method
COND < 0 (number of size frequency methods entered as negative):
1 1 Composition error structure (0 = multinomial, 1 = Dirichlet-multinomial using Theta*n, 2 = Dirichlet-multinomial using beta)
1 1 Parameter select: consecutive index of the Dirichlet-multinomial parameter for each method
END COND < 0
Then enter the lower edge of the bins for each method. The two row vectors shown
below contain the bin definitions for methods 1 and 2 respectively:
-26 28 30 32 34 36 38 40 42 ... 60 62 64 68 72 76 80 90
-26 28 30 32 34 36 38 40 42 44 46 48 50 52 54

Example input is shown below. Note that the format is identical to the length composition data, including sex and partition options, except for the addition of the first column, which indicates the size frequency method.

Method Year Month Fleet Sex Part Sample Size <composition: females then males>
1 1975 1 1 3 0 43 <data>
1 1977 1 1 3 0 43 <data>
1 1979 1 1 3 0 43 <data>
1 1980 1 1 3 0 43 <data>

7.20.0.1 Note


7.21 Tag-Recapture Data

Each released tag group is characterized by an area, time, sex and age at release. Each recapture event is characterized by a time and fleet (since fleets operate in only one area, it is not necessary to specify the area of recapture). Fleets with tagging data must be fishing fleets (e.g., fleet type 1 or 2).

Inside the model, the tagged cohort is apportioned across all growth patterns in a given area at a given time (with options to apportion to only one sex or to both). The tag cohort by growth pattern then behaves according to the movement and mortality of the growth pattern. The number of tagged fish is modeled as a negligible fraction of the total population, so a tagging event does not move fish from an untagged group to a tagged group. Instead, tagged fish are seeded into the population with no impact at all on the total population abundance or mortality.

Predominant age at release for each tag group must be assigned; this requirement keeps SS3 efficient. By assigning a tag group to a single age rather than distributing it across all possible ages according to the size composition of the release group, the tag group can be tracked as a single cohort through the age by time matrix with minimal overhead to the rest of the model. Tags are released at the beginning of a season and recaptures follow the timing of the fleet that made the recapture.

Example set-up for tagging data:
1 Do tags - 0/1/2. If this value is 0, then omit all entries below.
If value is 2, read 1 additional input.
COND > 0 All subsequent tag-recapture entries must be omitted if “Do Tags” = 0
3 Number of tag groups
7 Number of recapture events
2 Mixing latency period: N periods to delay before comparing observed
to expected recoveries (0 = release period).
10 Max periods (seasons) to track recoveries, after which tags enter
accumulator
COND = 2
2 Minimum recaptures. The number of recaptures up through the max period must be \(>=\) this minimum number of recaptured tags for the tag group to be included in the log likelihood.
Release Data
TG Area Year Season <tfill> Sex Age N Release
1 1 1980 1 999 0 24 2000
2 1 1995 1 999 1 24 1000
3 1 1985 1 999 2 24 10
Recapture Data
TG Year Season Fleet Number
1 1982 1 1 7
1 1982 1 2 5
1 1985 1 2 0
2 1997 1 1 6
2 1997 2 1 4
3 1986 1 1 7
3 1986 2 1 5

7.21.0.1 Note


7.22 Stock (Morph) Composition Data

It is sometimes possible to observe the fraction of a sample that is composed of fish from different stocks. These data could come from genetics, otolith microchemistry, tags, or other means. The growth pattern feature allows definition of cohorts of fish that have different biological characteristics and which are independently tracked as they move among areas. SS3 now incorporates the capability to calculate the expected proportion of a sample of fish that comes from different growth patterns (“morphs”). In the inaugural application of this feature, a 3-area model had one stock spawning and recruiting in area 1 and the other stock in area 3; seasonally, the stocks would move into area 2, where the stock composition observations were collected, and then back to their natal areas later in the year.

Stock composition by growth pattern (morph) data can be entered in as follows:
1 Do morph composition; if zero, then do not enter any further input below.
COND = 1
3 Number of observations
2 Number of morphs
0.0001 Minimum Compression
Year Month Fleet Null Nsamp Data by N Morphs
1980 1 1 0 36 0.4 0.6
1981 1 1 0 40 0.44 0.54
1982 1 1 0 50 0.37 0.63

7.22.0.1 Note


7.23 Selectivity Empirical Data (future feature)

It is sometimes possible to conduct field experiments or other studies to provide direct information about the selectivity of a particular length or age relative to the length or age that has peak selectivity, or to have a prior for selectivity that is more easily stated than a prior on a highly transformed selectivity parameter. This section provides a way to input data that would be compared to a specified derived value for selectivity. This is a placeholder at this time; it is required in the data file and will be fully implemented in a future version.

The selectivity data feature is under development for a future option and is not yet implemented.
The input line must still be specified, as follows:
0 Do data read for selectivity (future option)
End of Data File
999 #End of data file marker

7.24 Excluding Data

Data that are before the model start year or greater than the retrospective year are not moved into the internal working arrays at all. So if you have any alternative observations that are used in some model runs and not in others, you can simply give them a negative year value rather than having to comment them out. The first output to data.ss_new has the unaltered and complete input data. Subsequent reports to data.ss_new produce expected values or bootstraps only for the data that are being used. Additional information on bootstrapping is available in the Bootstrap Data Files section.

Data that are to be included in the calculations of expected values, but excluded from the calculation of negative log likelihood, are flagged by use of a negative value for fleet number.

7.25 Data Super-Periods

The super-period capability allows the user to introduce data that represent a blend across a set of time steps and to cause the model to create an expected value for this observation that uses the same set of time steps. The option is available for all types of data and a similar syntax is used.

All super-period observations must be contiguous in the data file. All but one of the observations in the sequence must have a negative value for fleet ID, so the data associated with these dummy observations will be ignored. The observed values must be combined outside the model and then inserted into the data file for the one observation with a positive fleet number.

Super-periods are started with a negative value for month and stopped with the next negative value for month; observations within the super-period are designated with a negative fleet field. The standard error or input sample size field is now used for weighting of the expected values. An error message is generated if the super-period does not contain exactly one observation with a positive fleet field.

An expected value for the observation will be computed for each selected time period within the super-period. The expected values are weighted according to the values entered in the se (or input sample size) field for all observations except the single observation holding the combined data. The expected value for that year gets a relative weight of 1.0. So in the example below, the relative weights are: 1982, 1.0 (fixed); 1983, 0.85; 1985, 0.4; 1986, 0.4. These weights are summed and rescaled to sum to 1.0, and are output in the echoinput.sso file.

Not all time steps within the extent of a super-period need be included. For example, in a three season model, a super-period could be set up to combine information from season 2 across 3 years, e.g., skip over the season 1 and season 3 for the purposes of calculating the expected value for the super-period. The key is to create a dummy observation (negative fleet value) for all time steps, except 1, that will be included in the super-period and to include one real observation (positive fleet value; which contains the real combined data from all the specified time steps).

Super-period example:
Year Month Fleet Obs se Comment
1982 -2 3 34.2 0.3 Start super-period. This observation has a positive fleet value, so it is expected to contain the combined data from all identified periods of the super-period. The se entered here is used as the se of the combined observation. The expected value for the survey in 1982 will have a relative weight of 1.0 (default) in calculating the combined expected value.
1983 2 -3 55 0.85 In super-period; entered observation is ignored. The expected value for the survey in 1983 will have a relative weight equal to the value in the se field (0.85) in calculating the combined expected value.
1985 2 -3 88 0.40 Note that 1984 is not included in the super-period. Relative weight for 1985 is 0.4
1986 -2 -3 88 0.40 End super-period
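
The weighting arithmetic implied by this example can be checked in R:

w <- c("1982" = 1.0, "1983" = 0.85, "1985" = 0.4, "1986" = 0.4)  # combined observation fixed at 1.0
round(w / sum(w), 3)  # rescaled to sum to 1.0, as reported in echoinput.sso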

A time step that is within the time extent of the super-period can still have its own separate observation. In the above example, the survey observation in 1984 could be entered as a separate observation, but it must not be entered inside the contiguous block of super-period observations. For composition data (which allow for replicate observations), a particular time step’s observations could be entered both as a member of a super-period and as a separate observation.

The super-period concept can also be used to combine seasons within a year in a multi-season model. This usage could be preferred if fish are growing rapidly within the year, so that their effective age selectivity changes within the year as they grow; if within-year growth gives fishery data collected year-round broader size-at-age modes than a mid-year approximation can produce; or in situations with very high fishing mortality.

8 Control File

8.1 Overview of Control File

These model features are defined in the control file in the following order:

  1. Number of growth patterns and platoons

  2. Design matrix for assignment of recruitment to area/settlement event/growth pattern

  3. Design matrix for movement between areas

  4. Definition of time blocks that can be used for time-varying parameters

  5. Controls for all time-varying parameters

  6. Specification for growth and fecundity

  7. Natural mortality growth parameters, weight-at-length, maturity, and fecundity, for each sex

  8. Hermaphroditism parameter line (if used)

  9. Recruitment distribution parameters for each area, settlement event, and growth pattern

  10. Cohort growth deviation

  11. Movement between areas (if used)

  12. Age error parameter line (if used)

  13. Catch multiplier (if used)

  14. Fraction female

  15. Setup for any mortality-growth parameters that are time-varying

  16. Seasonal effects on biology parameters

  17. Spawner-recruitment parameters

  18. Setup for any stock recruitment parameters that are time-varying

  19. Recruitment deviations

  20. \(F\) ballpark value in specified year

  21. Method for calculating fishing mortality (\(F\))

  22. Initial equilibrium \(F\) for each fleet

  23. Catchability (Q) setup for each fleet and survey

  24. Catchability parameters

  25. Setup for any catchability parameters that are time-varying

  26. Length selectivity, retention, discard mortality setup for each fleet and survey

  27. Age selectivity setup for each fleet and survey

  28. Parameters for length selectivity, retention, discard mortality for each fleet and survey

  29. Parameters for age selectivity, retention, discard mortality for each fleet and survey

  30. Setup for any selectivity parameters that are time-varying

  31. Tag-recapture parameters

  32. Variance adjustments

  33. Lambdas for likelihood components

The order in which they appear in the control file has grown over time rather opportunistically, so it may not appear particularly logical at this time, especially various aspects of recruitment distribution and growth.

8.2 Parameter Line Elements

The primary role of the control file is to define the parameters to be used by the model. The general syntax of the 14 elements of a long parameter line is described here. Time-varying parameter lines, if used, contain only the first seven elements and are referred to as short parameter lines. Three types of time-varying properties can be applied to a base parameter: blocks or trend, environmental linkage, and random deviation. Each parameter line contains:

Column Element Description
1 LO Minimum value for the parameter
2 HI Maximum value for the parameter
3 INIT Initial value for the parameter. If the phase (described below) for the parameter is negative the parameter is fixed at this value. If the ss3.par file is read, it overwrites these INIT values.
4 PRIOR Expected value for the parameter. This value is ignored if the prior type is 0 (no prior) or 1 (symmetric beta). If the selected prior type (described below) is log-normal, this value is entered in natural log space.
5 PRIOR sd sd for the prior, used to calculate likelihood of the current parameter value. This value is ignored if prior type is 0. The sd is in regular space regardless of the prior type.
6 PRIOR TYPE 0 = none;
1 = symmetric beta;
2 = full beta;
3 = log-normal without bias adjustment;
4 = log-normal with bias adjustment;
5 = gamma; and
6 = normal.
7 PHASE Phase in which parameter begins to be estimated. A negative value causes the parameter to retain its INIT value (or value read from the ss3.par file).
8 Env var & Link Create a linkage to an input environmental time series
9 Dev link Invokes use of the deviation vector in the linkage function
10 Dev min yr Beginning year for the deviation vector
11 Dev max yr Ending year for the deviation vector
12 Dev phase Phase for estimation for elements in the deviation vector
13 Block Time block or trend to be applied
14 Block function Functional form for the block offset.

Note that relative to Stock Synthesis v.3.24, the order of PRIOR sd and PRIOR TYPE have been switched and the PRIOR TYPE options have been renumbered.

The full parameter line (14 elements) syntax for the mortality-growth, spawn-recruitment, catchability, and selectivity sections provides additional controls to give a parameter time-varying properties. If a parameter (a full parameter line of length 14) is set up to be time-varying (i.e., parameter time blocks, annual deviations), short parameter lines (the first 7 elements) must be specified immediately after the main parameter block (e.g., the mortality-growth parameter section). Additional information regarding time-varying parameters and how to implement them is in the Using Time-Varying Parameters section.

8.3 Terminology

When the term COND appears in the “Typical Value” column of this documentation (it does not actually appear in the model files), it indicates that the following section is omitted except under certain conditions, or that the factors included in the following section depend upon certain conditions. In most cases, the description in the definition column is the same as the label output to the ss_new files.

8.4 Beginning of Control File Inputs

Typical Value Description and Options
#C comment Comments beginning with #C at the top of the file will be retained and included in output.
0 0 = Do not read the weight-at-age (wtatage.ss) file;
1 = Read the weight-at-age (wtatage.ss) file, also read and use the growth parameters; and
2 = Future option to read the weight-at-age (wtatage.ss) file, then omit reading and using growth parameters and all length-based data.
Additional information on the weight-at-age file and the expected formatting can be found in the Empirical Weight-at-Age section.
1 Number (N) of growth patterns (GP), also referred to as morphs:
These are collections of fish with unique biological characteristics (growth, mortality, weight-length, reproduction). The GP \(\times\) Sex \(\times\) Settlement Events constitute unique growth patterns that are tracked in SS3. They are assigned these characteristics at birth and retain them throughout their lifetime. At recruitment, growth pattern members are distributed across areas (if any) and they retain their biological characteristics even if they move to another area in which a different cohort with different biological characteristics might predominate. For example, one could assign a fast-growing growth pattern to recruit predominately in a southern area and a slow-growing growth pattern to a northern area. The natural mortality and growth parameters are specified for each growth pattern in the mortality-growth parameter section in the order of female growth pattern 1 to growth pattern N followed by male growth pattern 1 to growth pattern N in a two sex model.
1 Number of platoons within a growth pattern/morph:
This allows exploration of size-dependent survivorship. A value of 1 will not create additional platoons. Odd-numbered values (i.e., 3, 5) will break the overall morph into that number of platoons, creating a smaller, larger, and mean growth platoon. The higher the number of platoons, the slower the model will run, so values above 5 are not advised. The fraction of each morph assigned to each platoon is custom-input or designated to be a normal approximation. When multiple platoons are designated, an additional input is the ratio of between-platoon to within-platoon variability in size-at-age. This is used to partition the total growth variability. For the platoons, their size-at-age is calculated as a factor (determined from the between-within variability calculation) times the size-at-age of the central morph, which is determined from the growth parameters for that Growth Pattern \(\times\) Sex.
COND > 1 Following 2 lines are conditional on N platoons > 1.
0.7 Platoon within/between standard deviation ratio. Ratio of the amount of variability in length-at-age within platoons to between platoons so that a small ratio means that the platoons are narrower and more widely spaced. A parameter (after movement parameters) is needed if the within/between standard deviation ratio is negative.
0.2 0.6 0.2 Distribution among platoons. Enter either a custom vector of length N, or a vector with -1 as the first value to get a normal approximation: (0.15, 0.70, 0.15) for 3 platoons or (0.031, 0.237, 0.464, 0.237, 0.031) for 5 platoons.

8.4.1 Weight-at-Age

The capability to read empirical body weight-at-age for the population and each fleet was added starting in v.3.04, in lieu of generating these weights internally from the growth parameters, weight-at-length, and size-selectivity. The values are read from a separate file named wtatage.ss. This file is only required to exist if this option is selected. See the Empirical Weight-at-Age section for additional information on file formatting for empirical weight-at-age.

8.4.2 Settlement Timing for Recruits and Distribution

In older versions of SS3 one value of spawning biomass was calculated annually at the beginning of one specified spawning season and this spawning biomass produced one annual total recruitment value. The annual recruitment value was then distributed among seasons, areas, and growth types according to other model parameters.

Additional control of the seasonal timing was added in v.3.30 and now there is an explicit elapsed time between spawning and recruitment. Spawning still occurs, just once per year, which defines a single spawning biomass for the stock-recruitment curve, but its timing can be at any specified time, not just the beginning of a season. Recruitment of the progeny from an annual spawning can now enter the population in one or more settlement events, at some point after spawning as defined by the user.

Typical Value Description and Options
1 Recruitment distribution method. This section controls which combinations of growth pattern \(\times\) area \(\times\) settlement will get a portion of the total recruitment coming from each spawning. Options:
1 = no longer available (used in Stock Synthesis v.3.24 and earlier);
2 = main effects for growth pattern, settle timing, and area;
3 = each settle entity; and
4 = none, no parameters (only if growth pattern \(\times\) settlement \(\times\) area = 1).
1 Spawner-Recruitment (not implemented yet, but required), options:
1 = global; and
2 = by area (by area is not yet implemented; there is a conceptual challenge to doing the equilibrium calculation when there is fishing).
1 Number of recruitment settlement assignments. Must be at least 1 even if only 1 settlement and 1 area because the timing of that settlement must be specified.
0 Future feature, not implemented yet but required.
Growth Pattern Month Area Age at settlement
1 5.5 1 0

The above example specifies settlement to mid-May (month 5.5). Note that normally the calendar age at settlement is 0 if settlement happens between the time of spawning and the end of that year, and at age 1 if settlement is in the year after spawning.

Below is an example setup with multiple settlement events, one of which occurs in the year following spawning:

3 Number of recruitment settlement events
0 Unused option
Growth Pattern Month Area Age (for each settlement assignment)
1 11.0 1 0
1 12.0 1 0
1 1.0 1 1

Details regarding settlement of recruits and timing:

The distribution of recruitment among these settlement events is controlled by recruitment apportionment parameters. There must be a parameter line for each growth pattern, then for each area, then for each settlement. All of these are required, but only those growth pattern \(\times\) area \(\times\) settlement combinations designated to receive recruits in the recruitment design matrix will have their parameter used in the recruitment distribution calculation. For the recruitment apportionment, the parameter values are the natural log of the apportionment weight. The sum of all apportionment weights is calculated across the growth pattern \(\times\) area \(\times\) settlement combinations designated to receive recruits in the recruitment design matrix, and the weights are then scaled to sum to 1.0 so that the total recruitment from the spawning event is distributed among the cells designated to receive recruitment. Additionally, these distribution parameters can be time-varying, so the fraction of the recruits that occur in a particular growth pattern, area, or settlement can change from year to year. To specify annual variation in the distribution of recruits by area, add a start and end year in the deviation min year and max year columns. As with the apportionment of recruits by area, one area's parameter should be fixed while the other area(s) deviate relative to it. If annual deviations are specified, two additional short parameter lines are required to specify the standard error and the autocorrelation for each area with deviations.

8.4.2.1 Recruitment Distribution and Parameters


Recruits are apportioned according to:

\[\text{apportionment}_i = \frac{e^{p_i}}{\sum_{i=1}^{N}e^{p_i}}\]

where \(p_i\) is the apportionment parameter (the natural log of the apportionment weight) for settlement entity \(i\) and \(N\) is the number of settlement events. These parameters are defined in the mortality-growth parameter section.
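For illustration, the calculation above can be sketched in R (the language of the SS3 output-processing tools); the parameter values here are hypothetical:

```r
# Minimal sketch of the recruitment apportionment calculation.
# p: vector of apportionment parameters (natural log of apportionment
# weight), one per entity designated to receive recruits.
apportion_recruits <- function(p) {
  w <- exp(p)   # back-transform to apportionment weights
  w / sum(w)    # scale so the proportions sum to 1.0
}

# Example with three settlement events (hypothetical parameter values)
round(apportion_recruits(c(0.0, -0.5, -2.0)), 3)
#> 0.574 0.348 0.078
```

Because only relative values matter in this calculation, fixing one parameter (e.g., at 0.0) and estimating the others relative to it avoids confounding.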

Tips for fixing or estimating the recruitment apportionment:

In a seasonal model, all cohorts graduate to age 1 when they first reach January 1, even if the seasonal structure of the model has them being spawned in the late fall. In general, this means that the model operates under the assumption that all age data have been adjusted so that fish are age 0 at the time of spawning and all fish graduate to the next age on January 1. This can be problematic if the ageing structures deposit a ring at another time of year. Consequently, you may need to add or subtract a year from some of your age data to make it conform to the model's expected data structure, or, more ideally, define the calendar year within the model to start at the beginning of the season in which ring deposition occurs. Talk with your ageing lab about their criteria for seasonal ring deposition.

Seasonal recruitment is coded to work smoothly with growth. If the recruitment occurring in each season is assigned the same growth pattern, then each seasonal cohort’s growth trajectory is simply shifted along the age/time axis. At the end of the year, the early born cohorts will be larger, but all are growing with the same growth parameters, so all will converge in size as they approach their common maximum length (e.g., no seasonal effects on growth).

At the time of settlement, fish are assigned a size equal to the lower edge of the first population size bin, and they grow linearly until they reach the age A1. A warning is generated if the first population length bin is greater than 10 cm as this seems an unreasonably large value for a larval fish. A1 is in terms of real age elapsed since birth. All fish advance to the next integer age on January 1, regardless of birth season. For example, consider a 2 season model with some recruitment in each season and with each season’s recruits coming from the same GP. At the end of the first year, the early born fish will be larger but both of the seasonal cohorts will advance to an integer age of 1 on Jan 1 of the next year. The full growth curve is still calculated below A1, but the size-at-age used is the linear replacement. Because the linear growth trajectory can never go negative, there is no need for the additive constant to the standard deviation (necessary for the growth model used in SS2 V1.x), but the option to add a constant has been retained in the model.

8.4.3 Movement

Here the movement of fish between areas is defined. This is a box transfer with no explicit adjacency of areas, so fish can move from any area to any other area in each time step. While not incorporated yet, there is a desire for future versions of SS3 to allow sex-specific movement, and also some form of mirroring so that sexes and growth patterns can share the same movement parameters if desired.

Typical Value Description and Options
2 Enter Number of movement definitions.
1.0 First age that moves. This value is a real number, not an integer, to allow for an in-year start to movement in a multi-season model. It is the real age at the beginning of a season, even though movement does not occur until the end of the season. For example, in a setup with two 6-month seasons a value of 0.5 will cause the age 0 fish to not move when they complete their first 6-month season of life, and then to move at the end of their second season because they start movement capability when they reach the age of 0.5 years (6 months).
1 1 1 2 4 10 Movement definitions: season, growth pattern, source area, destination area, age1, and age2. The example shown here has 1 growth pattern and 2 areas with fish moving between the two areas. The rate of movement will be controlled by the movement parameters defined later in the mortality-growth parameter section. Here age1 and age2 specify the range over which the movement parameters are interpolated, with movement constant below age1 and above age2.
1 2 2 1 4 10

Two parameters will be entered later for each growth pattern, area pair, and season.

8.4.4 Time Blocks

Typical Value Description and Options
3 Number of block patterns. These patterns can be referred to in the parameter sections to create a separate parameter value for each block.
COND > 0: Following inputs are omitted if the number of block patterns equals 0.
3 2 1 Blocks per pattern:
1975 1985 1986 1990 1995 2001 Beginning and ending years for blocks in design 1; years not assigned to a block period retain the baseline value for a parameter that uses this pattern.
1987 1990 1995 2001 Beginning and ending years for blocks in design 2.
1999 2002 Beginning and ending years for blocks in design 3.

Blocks and other time-varying parameter controls are operative during forecast years, so care should be taken when setting the end year of the last block in a pattern. If that end year is set to the last year in the time series, then the parameter will revert to the base value for the forecast. If the user wants to continue the last block through the forecast, it is advisable to set the last block’s end year value to -2 to cause SS3 to reset it to the last year of the forecast. Using the value -1 will set the block’s end year to the last year of the time series and leave the forecast at the base parameter value. Note that additional controls on time-varying parameters in forecast years are in the forecast section.

8.4.5 Auto-generation

Auto-generation is a useful way to automatically create the required short time-varying parameter lines, which will be written in the control.ss_new file. These parameter lines can then be copied into the control file and modified as needed. As an example, if you want to add a block to natural mortality, modify the block and block function entry of the mortality parameter line, ensure that auto-generation is set to 0 (for the biology section at least), and run the model without estimation. The control.ss_new file will then show the required block parameter line specification for natural mortality, and this line can be copied into the main control file. Note that if auto-generation is on (set to 0), the model will not expect to read the time-varying parameters in that section of the control file and will error out if they are present.

Typical Value Description and Options
1 Environmental/Block/Deviation adjust method for all time-varying parameters.
1 = warning relative to base parameter bounds; and
3 = no bound check. Logistic bound check form from previous SS3 versions (e.g., v.3.24) is no longer an option.
1 1 1 1 1 Auto-generation of time-varying parameter lines. Five values control auto-generation for parameter block sections: 1-biology, 2-spawn-recruitment, 3-catchability, 4-tag (future), and 5-selectivity.
The accepted values are:
0 = auto-generate all time-varying parameters (no time-varying parameters are expected);
1 = read each time-varying parameter line as exists in the control file; and
2 = read each time-varying parameter line, and auto-generate that line if the value read for LO is -12345. Useful for generating reasonable starting values.

8.5 Biology

8.5.1 Natural Mortality

Natural mortality (\(M\)) options include some options that are referenced to integer age and other options to real age since settlement. If using an option that references \(M\) to real age since settlement, \(M\) varies by age and will change by season (e.g., cohorts born early in the year will have different \(M\) than cohorts born later in the year).

8.5.1.1 Lorenzen Natural Mortality


Lorenzen natural mortality is based on the concept that natural mortality is driven by physiological and ecological processes and varies over the life cycle of a fish. So, natural mortality is scaled by the length of the fish. In this implementation, a reference age and its \(M\) value are read in, and other ages will have an \(M\) scaled to their body size-at-age. However, if platoons are used, all will have the same \(M\) as their growth pattern. The Lorenzen \(M\) calculation will be updated if the starting year growth parameters are active, but if growth parameters vary during the time series, \(M\) is not further updated. Additionally, \(M\) is linked to the length-at-age from the growth parameters and cannot be used with an empirical weight-at-age model. Be careful in using Lorenzen when there is time-varying growth.
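As a rough illustration of the scaling described above, here is a sketch in R assuming \(M\) inversely proportional to mean length-at-age and anchored at the reference age (an assumption for illustration, not the SS3 source code):

```r
# Illustrative Lorenzen natural mortality: M at each age is scaled by
# mean length-at-age relative to the reference age (sketch only).
lorenzen_m <- function(len_at_age, ages, ref_age, m_ref) {
  len_ref <- len_at_age[ages == ref_age]
  m_ref * len_ref / len_at_age
}

# Hypothetical von Bertalanffy mean lengths for ages 0-10
ages <- 0:10
len  <- 60 * (1 - exp(-0.2 * (ages + 0.5)))
round(lorenzen_m(len, ages, ref_age = 4, m_ref = 0.2), 3)
```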

8.5.1.2 Age-specific \(M\) Linked to Age-Specific Length and Maturity


This is an experimental option available as of v.3.30.17.

A general model for age- and sex-specific natural mortality expands a model developed by Maunder et al. (2010) and Maunder (2011) and is based on the following assumptions:

  1. \(M\) for younger fish is due mainly to processes that are functions of the size of the individuals (e.g., predation);

  2. \(M\) increases after individuals become reproductively mature;

  3. Maturity follows a logistic curve; and

  4. \(M\) caused by senescence is either small or occurs at an age for which there are few fish alive, so it is not influential.

The model is based on combining the observation that \(M\) is inversely proportional to length for young fish (Lorenzen 2000) and the logistic model from Lehodey, Senina, and Murtugudde (2008) for older fish. Natural mortality for a given sex and age is: \[M_{a,s} = M_{juv,s}\left(\frac{L_{a,s}}{L_{mat*,s}}\right)^{\lambda} + \frac{M_{mat,s}-M_{juv,s}\left(\frac{L_{a,s}}{L_{mat*,s}}\right)^{\lambda}}{1+e^{\beta_s(L_{a,s}- L_{50,s})}},\]

where \(M_{juv,s}\) (juvenile natural mortality), \(\lambda\) (power), \(L_{mat*,s}\) (first mature length of fish), and \(M_{mat,s}\) (the mature instantaneous natural mortality rate by sex) are user inputs in long parameter lines. For sub-option 1, the \(L_{50,s}\) and \(\beta_s\) (slope) parameters are taken from the maturity relationship within the model, which must use maturity-fecundity option 1. For sub-option 3, the \(L_{50,s}\) (the length at which 50% of fish are mature) and \(\beta_s\) (slope) parameters are specified in long parameter lines by the user.

Note that juvenile natural mortality, \(M_{juv,s}\), and first mature length of fish, \(L_{mat*,s}\), inputs are by sex (and growth pattern), but it is recommended to share them across sex by using the offset option. Using offset option 2 (males offset from females) causes male parameters to be an offset to the female parameters, so a parameter value of 0.0 for a male parameter will fix the parameter as same as the female parameter. Alternatively, using offset option 1 and setting males to 0.0 and not estimating the parameter fixes the parameter at the value of the female parameter (the section on fixing male parameters the same as female parameters has more details). This fulfills an additional assumption: \(M\) caused by reproduction may differ by sex, but juvenile \(M\) is independent of sex.

The length for a given age and sex, \(L_{a,s}\), is calculated within the model.
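A minimal R sketch of the equation above for one sex (all parameter values hypothetical; \(\beta\) is negative, as for the logistic maturity slope):

```r
# Age- and sex-specific M from the equation above, for one sex.
# L_a: mean length-at-age; beta is the (negative) logistic slope.
m_at_age <- function(L_a, M_juv, lambda, L_mat_star, M_mat, beta, L50) {
  M_size <- M_juv * (L_a / L_mat_star)^lambda          # size-driven component
  M_size + (M_mat - M_size) / (1 + exp(beta * (L_a - L50)))
}

L_a <- 60 * (1 - exp(-0.2 * (0:20 + 0.5)))             # hypothetical lengths
round(m_at_age(L_a, M_juv = 0.6, lambda = -1, L_mat_star = 35,
               M_mat = 0.25, beta = -0.5, L50 = 35), 3)
```

At small sizes the size-driven term dominates (high juvenile \(M\)); as length passes \(L_{50,s}\) the logistic term moves \(M\) toward \(M_{mat,s}\).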

Some suggested defaults for user-provided parameter inputs are:

8.5.1.3 Age-range Lorenzen


The original implementation of Lorenzen natural mortality in Stock Synthesis uses a reference age and its associated natural mortality as inputs to determine the Lorenzen curve. However, sometimes this information is not known. The age-range Lorenzen instead uses a range of ages and the average natural mortality over them to calculate a Lorenzen natural mortality curve.

Like the original Lorenzen option, all ages will have an \(M\) scaled to their body size-at-age, and care should be taken when there are multiple growth patterns or time-varying growth.

8.5.1.4 Natural Mortality Options


Typical Value Description and Options
1 Natural Mortality Options:
0 = A single parameter;
1 = N breakpoints;
2 = Lorenzen;
3 = Read age specific \(M\) and do not do seasonal interpolation;
4 = Read age specific and do seasonal interpolation, if appropriate;
5 = Age-specific \(M\) linked to age-specific length and maturity (experimental);
6 = Age-range Lorenzen.
COND = 0 No additional natural mortality controls.
COND = 1
4 Number of breakpoints. Then read a vector of ages for these breakpoints. Later, per sex \(\times\) GP, read N parameters for the natural mortality at each breakpoint.
2.5 4.5 9.0 15.0 Vector of age breakpoints.
COND = 2
4 Reference age for Lorenzen natural mortality: read one additional integer value that is the reference age. Later read one long parameter line for each sex \(\times\) growth pattern that will be the \(M\) at the reference age.
COND = 3 or 4 Do not read any natural mortality parameters in the mortality growth parameter section. With option 3, these \(M\) values are held fixed for the integer age (no seasonality or birth season considerations). With option 4, there is seasonal interpolation based on real age, just as in options 1 and 2.
0.20 0.25...0.20 0.23... Age-specific \(M\) values where in a 2 sex model the first row is female and the second row is male. If there are multiple growth patterns female growth pattern 1-N is read first followed by males 1-N growth pattern.
COND = 5 Age-specific \(M\) linked to age-specific length and maturity sub-options.
1 = Requires 4 long parameter lines per sex \(\times\) growth pattern; the maturity parameters are taken from the maturity relationship, so this must be used with maturity option 1;
2 = Reserved for future option; and
3 = Requires 6 long parameter lines per sex \(\times\) growth pattern.
COND = 6 Read two additional integer values that are the age range for average \(M\). Later, read one long parameter line for each sex \(\times\) growth pattern that will be the average \(M\) over the reference age range.
0 Minimum age of average \(M\) range for calculating Lorenzen natural mortality.
10 Maximum age of average \(M\) range for calculating Lorenzen natural mortality.

8.5.2 Growth

8.5.2.1 Timing


When fish recruit at the real age of 0.0 at settlement, they have a body size equal to the lower edge of the first population size bin. The fish then grow linearly until they reach a real age equal to the input value “growth-at-age for L1” and a size equal to the parameter value for L1 (the minimum length parameter). As they age further, they grow according to the selected growth equation. The growth curve is calibrated to go through the size L2 parameter when they reach the age of maximum length.

8.5.2.2 Maximum Length (Linf)


If “Growth at age for L2” is set equal to 999, then the L2 parameter is used as Linf.

8.5.2.3 von Bertalanffy growth function


The von Bertalanffy growth curve is parameterized as:

\[L_a = L_\infty + (L_{1}-L_\infty)e^{-k(a-A_{1})}\]

with parameters \(L_{1}\), \(L_\infty\), and \(k\). The \(L_\infty\) is calculated as:

\[L_\infty = L_{1} + \frac{L_2 - L_1}{1-e^{-k(A_2-A_1)}}\]

based on the input values of fixed age for first size-at-age (\(A_1\)) and fixed age for second size-at-age (\(A_2\)).
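These two equations combine into a small R function; the example values echo the mortality-growth parameter example later in this section (L1 = 21 cm at \(A_1\) = 1, L2 = 70 cm at \(A_2\) = 25, \(k\) = 0.15):

```r
# von Bertalanffy growth in the SS3 parameterization: L1 and L2 are the
# mean sizes at the reference ages A1 and A2.
vb_len_at_age <- function(a, L1, L2, k, A1, A2) {
  Linf <- L1 + (L2 - L1) / (1 - exp(-k * (A2 - A1)))
  Linf + (L1 - Linf) * exp(-k * (a - A1))
}

round(vb_len_at_age(1:30, L1 = 21, L2 = 70, k = 0.15, A1 = 1, A2 = 25), 1)
```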

8.5.2.4 Schnute/Richards growth function


The Richards (1959) growth model as parameterized by Schnute (1981) provides a flexible growth parameterization that allows for not only asymptotic growth but also linear, quadratic, or exponential growth. The Schnute/Richards growth function is invoked by entering option 2 in the growth type field. It uses the standard growth parameters (e.g., Lmin, Linf, and \(k\)) plus a fourth parameter that is read after the von Bertalanffy growth coefficient parameter (\(k\)). When this fourth parameter has a value of 1.0, it is equivalent to the standard von Bertalanffy growth curve. When this function was first introduced, it was required that the A0 parameter be set to 0.0.

The Schnute/Richards growth model is parameterized as:

\[L_t = \Big(L_{MIN}^b + (L_{MAX}^b-L_{MIN}^b)\frac{1-e^{-k(t-A_{1})}}{1-e^{-k(A_2-A_1)}}\Big)^{1/b}\]

with parameters \(L_{MIN}\), \(L_{MAX}\), \(k\), and \(b\).

The Richards model has \(b < 0\), while the von Bertalanffy model has \(b = 1\). The general case of \(b > 0\) was called the “generalized von Bertalanffy” by Schnute (1981). The Gompertz model has \(b = 0\), where the equation is undefined as written above and must be replaced with:

\[L_t = y_1\exp\Big[\ln(y_2/y_1)\frac{1-e^{-k(t-A_1)}}{1-e^{-k(A_2-A_1)}}\Big]\]

where \(y_1 = L_{MIN}\) and \(y_2 = L_{MAX}\) are the sizes at ages \(A_1\) and \(A_2\).

Thus, if \(b\) is to be estimated as a free parameter, it might be necessary to include options for constraining it to different ranges.
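A sketch of the Schnute/Richards function in R, including the Gompertz limiting form at \(b = 0\); setting \(b = 1\) reproduces the von Bertalanffy curve above:

```r
# Schnute/Richards growth (sketch of the equations above).
schnute_len_at_age <- function(t, Lmin, Lmax, k, b, A1, A2) {
  frac <- (1 - exp(-k * (t - A1))) / (1 - exp(-k * (A2 - A1)))
  if (b == 0) {
    Lmin * exp(log(Lmax / Lmin) * frac)      # Gompertz limiting form
  } else {
    (Lmin^b + (Lmax^b - Lmin^b) * frac)^(1 / b)
  }
}

# b = 1 matches the von Bertalanffy parameterization exactly
round(schnute_len_at_age(1:10, Lmin = 21, Lmax = 70, k = 0.15,
                         b = 1, A1 = 1, A2 = 25), 1)
```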

8.5.2.5 Mean size-at-maximum age


The mean size of fish in the maximum age bin depends upon how close the growth curve is to Linf by the time it reaches the maximum age, and upon the mortality rate of fish after they reach the maximum age. Users specify the mortality rate to use in this calculation during the initial equilibrium year; it should be reasonably close to \(M\) plus the initial \(F\). In v.3.30, this uses the von Bertalanffy growth out to 3 times the maximum population age and decays the numbers at age by exp(-value set here). For subsequent years of the time series, the model should update the size-at-maximum age according to the weighted average of the mean size of fish already at the maximum age and the size of fish just graduating into the maximum age. Unfortunately, this updating currently only happens in years with time-varying growth; this will hopefully be fixed in a future version.

8.5.2.6 Age-specific K


This option creates age-specific K multipliers for each age of a user-specified age range, with independent multiplicative factors for each age in the range and for each growth pattern/sex. The null value is 1.0 and each age’s K is set to the next earlier age’s K times the value of the current age’s multiplier. Each of these multipliers is entered as a full parameter line, so inherits all time-varying capabilities of full parameters. The lower end of this age range cannot extend younger than the specified age for which the first growth parameter applies. This is a beta model feature, so examine output closely to assure you are getting the size-at-age pattern you expect. Beware of using this option in a model with seasons within year because the K deviations are indexed solely by integer age according to birth year. There is no offset for birth season timing effects, nor is there any seasonal interpolation of the age-varying K.

8.5.2.7 Growth cessation


A growth cessation model was developed for application to tropical tuna species (Maunder et al. 2018). Growth cessation allows for a linear relationship between length and age for the youngest individuals, followed by a marked reduction of growth after the onset of sexual maturity, modeled with a logistic function for the decreasing growth rate at older ages.

Example growth specifications:
Typical Value Description and Options
1 Growth Model:
1 = von Bertalanffy (3 parameters);
2 = Schnute’s generalized growth curve (aka Richards curve) with 3 parameters. Third parameter has null value of 1.0;
3 = von Bertalanffy with age-specific K multipliers for specified range of ages, requires additional inputs below following the placeholder for future growth feature;
4 = age-specific K. Set the base K as the K for age = N ages and work backwards; the age-specific K = K for the next older age * multiplier. Requires additional inputs below following the placeholder for future growth feature;
5 = age-specific K. Set the base K as the K for N ages and work backwards; the age-specific K = base K * multiplier. Requires additional inputs below following the placeholder for future growth feature;
6 = not implemented;
7 = not implemented; and
8 = growth cessation. Decreases the K for older fish. If implemented, the Amin and Amax parameters, the next two lines, need to be set at 0 and 999 respectively. The mortality-growth parameter section requires the base K parameter line which is interpreted as the steepness of the logistic function that models the reduction in the growth increment by age followed by a second parameter line which is the parameter related to the maximum growth rate.
1 Growth Amin (A1): Reference age for first size-at-age L1 (post-settlement) parameter. First growth parameter is size at this age; linear growth below this.
25 Growth Amax (A2): Reference age for second size-at-age L2 (post-settlement) parameter. Use 999 to treat as L infinity.
0.20 Exponential decay for growth above maximum age (plus group: fixed at 0.20 in v.3.24; should approximate initial Z). Alternative Options:
-998 = Disable growth above maximum age (plus group) similar to earlier versions of SS3 (prior to v.3.24); and
-999 = Replicate the simpler calculation done in v.3.24.
0 Placeholder for a future growth feature.
COND = 3 Growth model: age-specific K, where the age-specific K parameter values are multipliers of the age - 1 K parameter value. For example, if the base parameter is 0.20, then in the example setup the K parameter for age 5 equals 0.20 * age-5 multiplier, and the age 6 K value equals the age 5 K (0.20 * age-5 multiplier) multiplied by the age-6 multiplier. All ages above the maximum age with age-specific K are equal to the maximum age-specific K. The age-specific K values are available in the Report file in the AGE_SPECIFIC_K section. A sketch contrasting the three chaining options follows this table.
3 Number of K multipliers to read;
5 Minimum age for age-specific K; and
6 Second age for age-specific K; and
7 Maximum age for age-specific K.
COND = 4 Growth model: age-specific K, where the age-specific K parameter values are multipliers of the age + 1 K parameter value. For example, if the base parameter is 0.20, then in the example setup the K parameter for age 7 equals 0.20 * age-7 multiplier, and the age 6 K value equals the age 7 K (0.20 * age-7 multiplier) multiplied by the age-6 multiplier. All ages below the minimum age with age-specific K are equal to the minimum age-specific K. The age-specific K values are available in the Report file in the AGE_SPECIFIC_K section.
3 Number of K multipliers to read;
7 Maximum age for age-specific K;
6 Second age for age-specific K; and
5 Minimum age for age-specific K.
COND = 5 Growth model: age-specific K, where the age-specific K parameter values are multipliers of the base K parameter value. For example, if the base parameter is 0.20, then in the example setup the K parameter for age 7 equals 0.20 * age-7 multiplier, and the age 6 K value equals 0.20 * age-6 multiplier. The age-specific K values are available in the Report file in the AGE_SPECIFIC_K section.
3 Number of K multipliers to read;
7 Maximum age for age-specific K;
6 Second age for age-specific K; and
5 Minimum age for age-specific K.
0 Standard deviation added to length-at-age: enter 0.10 to mimic SS2 V1.xx; a value of 0.0 is recommended.
1 cv Pattern (cannot be time-varying):
0: CV=f(LAA), so the 2 parameters are in terms of cv of the distribution of length-at-age (LAA) and the interpolation between these 2 parameters is a function of mean length-at-age;
1: CV=f(A), so interpolation is a function of age (A);
2: SD=f(LAA), so parameters define the sd of length-at-age and interpolation is a function of mean length-at-age;
3: SD=f(A); and
4: Log-normal distribution of size-at-age. Input parameters will specify the sd of natural log size-at-age (e.g., entered values will typically be between 0.05 and 0.15). A bias adjustment is applied so the log-normal distribution of size-at-age will have the same mean size as when a normal distribution is used.
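The three age-specific K options above chain the multipliers differently; a minimal R sketch with a base \(K\) of 0.20 and hypothetical multipliers for ages 5-7:

```r
base_k <- 0.20
mult   <- c(1.1, 0.9, 0.8)            # hypothetical multipliers, ages 5, 6, 7

# Option 3: each K is the next *younger* age's K times its multiplier
k_opt3 <- base_k * cumprod(mult)            # 0.220 0.198 0.158

# Option 4: each K is the next *older* age's K times its multiplier
k_opt4 <- base_k * rev(cumprod(rev(mult)))  # 0.158 0.144 0.160

# Option 5: each K is the base K times its own multiplier
k_opt5 <- base_k * mult                     # 0.220 0.180 0.160
```

In all cases the resulting age-specific K values can be checked against the AGE_SPECIFIC_K section of the Report file.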

8.5.3 Maturity-Fecundity

Typical Value Description and Options
2 Maturity Option:
1 = length logistic;
2 = age logistic;
3 = read maturity-at-age for each female growth pattern;
4 = read a fecundity “x” maturity-at-age vector for all ages;
5 = disabled; and
6 = read vector of length-based maturity values.
Note: 2 parameter lines (maturity at 50% and maturity slope) must still be read even if option 3 or 4 is selected.
COND = 3 or 4 Maturity Option
0 0.05 0.10... Vector of age-specific maturity or fecundity. One row of length N ages + 1 based on the maximum population age for each female growth pattern.
COND = 6 Maturity Option
0 0.05 0.10... Vector of length-specific maturity or fecundity, based on the population length bins. One row of length equal to the number of population length bins (defined in the data file) for each female growth pattern.
1 First Mature Age: all ages below the first mature age will have maturity set to zero. This value is overridden if maturity option is 3 or 4 or if empirical weight-at-age (wtatage.ss) is used, but still must exist here.
1 Fecundity Option (irrelevant if maturity option is 4 or wtatage.ss is used):
1 = to interpret the 2 egg parameters as linear eggs/kg on body weight (current default), so fecundity = \(wt*(a+b*wt)\); values of a=1, b=0 cause eggs to be equivalent to spawning biomass;
2 = to set fecundity= \(a*L^b\);
3 = to set fecundity= \(a*W^b\), so values of a=1, b=1 causes fecundity to be equivalent to spawning biomass;
4 = fecundity = \(a+b*L\); and
5 = eggs = \(a+b*wt\).
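A compact R sketch of the five fecundity options above (wt in kg, L in cm; \(a\) and \(b\) are the Eggs-alpha and Eggs-beta parameters read later in the mortality-growth section):

```r
# Fecundity options 1-5 as listed above.
fecundity <- function(option, a, b, wt, L) {
  switch(option,
    wt * (a + b * wt),   # 1: linear eggs/kg of body weight
    a * L^b,             # 2: power function of length
    a * wt^b,            # 3: power function of weight
    a + b * L,           # 4: linear in length
    a + b * wt           # 5: linear in weight
  )
}

# Option 1 with a = 1, b = 0: eggs equivalent to spawning biomass
fecundity(1, a = 1, b = 0, wt = 2.5, L = 55)   # returns 2.5
```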

8.5.4 Hermaphroditism

Sequential hermaphroditism can be modeled in Stock Synthesis by having recruits be all female or all male and then defining a 3-parameter function which represents the annual age-specific probability of transition to the other sex. In a seasonal model, the transition can occur after each season or just once per year. There are also settings to control the first age which transitions and the contribution of males to the spawning biomass calculations.

The fraction female parameter should be configured to reflect the fraction of age 0 fish of each sex. Values of 0 or 1 may lead to NaN likelihoods, so we recommend changing values that would be fixed at 0 or 1 to 0.000001 or 0.999999.

Typical Value Description and Options
0 Hermaphroditism Option:
0 = not used;
1 = invoke female-to-male age-specific function; and
-1 = invoke male-to-female age-specific function.
COND = 1 or
COND = -1 Read 2 lines below if hermaphroditism is selected. Also read 3 parameters after reading the male weight-length parameters.
-1.2 Hermaphroditism Season / First Age:
-1 to do transition at the end of each season (after mortality and before movement); or
<positive integer> to select just one season.
If fractional part included (optional), indicates first age that transitions (otherwise, age 1 assumed).
0.5 Fraction of males to include in spawning biomass:
0 = no males in spawning biomass;
value between 0 and 1 = fraction of male biomass to include in spawning biomass; and
1 = simple addition of males to females.

The hermaphroditism option requires three full parameter lines in the mortality and growth section. These parameters control a cumulative normal distribution function as follows:

  1. The inflection age where the transition rate is halfway from 0 to its asymptote.

  2. The standard deviation of the cumulative normal (such that about 95% of the increase in transition rate occurs within 2 standard deviations of the inflection point).

  3. The asymptotic transition rate (the highest proportion that will transition to the other sex in a time step).

These parameter lines are entered directly after the weight-at-length parameters for males.
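A minimal sketch of this function in R with hypothetical parameter values; the proportion transitioning at each age follows a cumulative normal scaled by the asymptote:

```r
# Age-specific transition probability from the 3 hermaphroditism
# parameters described above (pnorm is the cumulative normal).
herma_transition <- function(age, inflect_age, sd, asymptote) {
  asymptote * pnorm(age, mean = inflect_age, sd = sd)
}

# Example: inflection at age 6, sd = 1.5, asymptotic rate 0.8
round(herma_transition(0:12, inflect_age = 6, sd = 1.5, asymptote = 0.8), 3)
```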

8.5.5 Natural Mortality and Growth Parameter Offset Method

The most common setup for natural mortality and growth parameters for two-sex models is direct assignment (option 1) which allows for independent estimation (or fixing) of natural mortality and growth parameters by sex. Within the direct assignment option there is functionality to set male parameters equal to the corresponding female parameter if the male INIT value is set to 0 and the phase is negative.

Alternatively, there may be situations where a user wants to create direct linkages between the natural mortality and growth parameters of the two sexes (options 2 or 3). If parameter offset option 2 is selected, the control file still requires that all male natural mortality and growth parameter lines be included. The natural mortality and growth parameters (e.g., k, Lmin, Lmax, CV1, CV2) for sex > 1 (typically male fish) have a value that is an exponential offset to the female natural mortality and growth parameters, e.g., \(M_{\text{male}} = M_{\text{female}}*exp(M_{\text{male offset}})\). An offset parameter can be fixed at 0.0, at a non-zero value, or estimated.

Parameter offset option 3 has an offset feature for the growth cv and for natural mortality. For the growth cv, the parameter for cv at old age is an exponential offset from the parameter for cv at young age, e.g., \(CV_{\text{old}} = CV_{\text{young}}*exp(CV_{\text{offset}})\). This allows cv old to track an estimated cv young parameter. For natural mortality, if there is more than 1 natural mortality parameter, then parameters 2 and higher for the same sex and growth pattern are exponential offsets from the first natural mortality parameter. Note that this is an old feature designed to work with natural mortality option 1 (breakpoints). It may work with natural mortality options 3 and 4, but this has not been tested.

Typical Value Description and Options
1 Parameter Offset Method:
1 = direct assignment;
2 = for each growth pattern by sex, parameter defines offset from sex 1, offsets are in exponential terms, so for example: \(M_{\text{old male}} = M_{\text{old female}}*exp(M_{\text{old male}})\); and
3 = for each growth pattern by sex, parameter defines offset from growth pattern 1 sex 1. For females, given that “natM option” is breakpoint and there are two breakpoints, parameter defines offset from early age (e.g., \(M_{\text{old female}} = M_{\text{young female}}*exp(M_{\text{old female}}\)). For males, given that “natM option” is breakpoint and there are two breakpoints, parameter is defined as offset from females AND from early age (e.g., \(M_{\text{old male}} = M_{\text{young female}}*exp(M_{\text{young male}})*exp(M_{\text{old male}})\)).

8.5.6 Catch Multiplier

These parameter lines are only included in the control file if the catch multiplier field in the data file is set to 1 for a fleet. The model expected catch \(C_{exp}\) by fleet is estimated by:

\[C_{exp} = \frac{C_{obs}}{c_{mult}}\]

where \(C_{obs}\) is the input catch by fleet (observed catch) within the data file and \(c_{mult}\) is the estimated (or fixed) catch multiplier. It has year-specific, not season-specific, time-varying capabilities. In the catch likelihood calculation, the model-expected catch is multiplied by the year- and fleet-specific catch multiplier before being compared to the observed retained catch.

8.5.7 Ageing Error Parameters

These parameters are only included in the control file if one of the ageing error definitions in the data file has requested this feature (by putting a negative value for the ageing error of the age zero fish of one ageing error definition). As of v.3.30.12, these parameters now have time-varying capability. Seven additional full parameter lines are required. The parameter lines specify:

  1. Age at which the estimated pattern begins (just linear below this age); this is the start age.

  2. Bias at start age (as additive offset from unbiased age).

  3. Bias at maximum (as additive offset from unbiased age).

  4. Power function coefficient for interpolating between those 2 values (value of 0.0 produces linear interpolation in the bias).

  5. Standard deviation at start age.

  6. Standard deviation at max age.

  7. Power function coefficient for interpolating between those 2 values.

Code for implementing vectors of mean age and standard deviation of age can be located online within the SS_miscfxn.tpl file, search for function “get_age_age” or “SS_Label_Function 45”.
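As a rough illustration of how the seven inputs interact, the sketch below is an assumption-laden stand-in, not the SS3 source (the exact interpolation is in the get_age_age function noted above); in particular, the power coefficients are treated here so that a value of 0.0 gives linear interpolation, consistent with the description in the list:

```r
# Illustrative ageing-error bias and sd curves from the 7 parameters.
age_bias_sd <- function(age, start_age, max_age,
                        bias_start, bias_max, bias_power,
                        sd_start, sd_max, sd_power) {
  frac <- pmin(pmax((age - start_age) / (max_age - start_age), 0), 1)
  list(bias = bias_start + (bias_max - bias_start) * frac^exp(bias_power),
       sd   = sd_start   + (sd_max   - sd_start)   * frac^exp(sd_power))
}

# Example: unbiased below age 1, growing bias and sd toward age 20
age_bias_sd(0:20, start_age = 1, max_age = 20,
            bias_start = 0, bias_max = 2, bias_power = 0,
            sd_start = 0.3, sd_max = 2.5, sd_power = 0)
```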

8.5.8 Sex Ratio

The last line in the mortality-growth parameter section allows the user to fix or estimate the sex ratio between female and male fish at recruitment. The parameter is specified as the fraction of female fish and is applied at settlement. The default option is a sex ratio of 0.50 with this parameter not estimated. Any composition data input as type = 3, both sexes, will be informative about the sex ratio because this data type scales females and males together, not separately. Estimation of the sex ratio is a new feature and should be done with care, with the user checking that the estimate reflects the data.

As of v.3.30.12, this parameter now has time-varying capability similar to other parameters in the mortality-growth section.

8.5.9 Predator Fleet Mortality

The ability to define a predator fleet was first implemented in v.3.30.18. A parameter line for predator mortality is only required if a predator fleet has been defined in the data file. For each fleet that is designated as a predator, a new parameter line is created in the mortality-growth section of the control file. This parameter will have the label \(M2\text{\_pred1}\), where the “1” is the index of the predator (not the index of the fleet being used as a predator). More than one predator can be included. If the model has > 1 season, it is normal to expect \(M2\) to vary seasonally. Therefore, only if the number of seasons is greater than 1, follow each \(M2\) parameter with N seasons parameters that provide the seasonal multipliers. These are simple multipliers on \(M2\), so at least one of them needs to have a non-estimated value. The set of multipliers can be used to restrict \(M2\) to only operate in one season if desired. If there is more than one predator fleet, each will have its own seasonal multipliers. If there is only 1 season in the model, then no multiplier lines are included.

\(M2\) is age-specific, but not sex or morph specific. The value of the \(M2\) parameter will be distributed across ages according to the selectivity for this fleet. Note again that “pred1” in the parameter label refers to the first predator in the model, not the number of the fleet configured as that predator. The resultant age-specific \(M2\) is added to the base \(M\) to create a total age-specific \(M\) that operates in the model exactly as \(M\) has always operated.

Because \(M2\) is a mortality-growth parameter, it can be time-varying like any other mortality-growth parameter. This is important because \(M2\), as a component added to base \(M\), will probably always need to be time-varying, whether by blocks, a random walk, or linkage to an external driver. A time series of \(M2\) from an external source can be input by setting the \(M2\) parameter to have a base value of 0.0 and linking it to the time series in the environmental data section of the data file using an additive link. In addition, the relationship should have a fixed slope of 1.0 such that \(M2(y) = 0.0 + 1.0 * M2\text{\_env\_input}(y)\).

Note that all existing reports of natural mortality are the total (base \(M\) + \(M2\)) natural mortality. The \(M2\) parameter is active in the virgin year and initial equilibrium year, where the value of \(M2\) in the start year is used. In the future, separate control of \(M2\) for the initial equilibrium will be provided. \(M2\) is part of the total \(M\) used in the spr and msy benchmark calculations. \(M2\) is active in the forecast era, so be attentive to its configuration if it is time-varying. Testing to date shows that this \(M2\) feature can replicate previous results using bycatch fleets.

Note that predator calculations probably will fail if tried with \(F\) Method = 1 (Pope’s), even though the Pope calculation is built into the hybrid \(F\) approach.

8.5.10 Read Biology Parameters

Next, the model reads the mg parameters in generally the following order (may vary based on selected options):

Parameter Description
Female natural mortality and growth parameters in the following order by growth pattern.
\(M\) Natural mortality for female growth pattern 1, where the number of natural mortality parameters depends on the option selected.
COND if \(M\) option = 1
N breakpoints N-1 parameter lines as exponential offsets from the previous reference age.
Lmin Length at Amin (units in cm) for female, growth pattern 1.
Lmax Length at Amax (units in cm) for female, growth pattern 1.
VBK von Bertalanffy growth coefficient (units are per year) for females, growth pattern 1.
COND if growth type = 2
Richards Coefficient Only include this parameter if Richards growth function is used. If included, a parameter value of 1.0 will have a null effect and produce a growth curve identical to von Bertalanffy.
COND if growth type >= 3 Age-Specific K
N parameter lines equal to the number of K deviations for the ages specified above.
cv young Variability for size at age \(<=\) Amin for females, growth pattern 1. Note that cv cannot vary over time, so do not set up env-link or a deviation vector. Also, units are either as cv or as standard deviation, depending on assigned value of cv pattern.
cv old Variability for size at age \(>=\) Amax for females, growth pattern 1. For intermediate ages, a linear interpolation of cv on mean size-at-age is used. Note that the units for cv will depend on the cv pattern and on the mortality-growth parameter offset method. The cv value cannot vary over time.
WtLen scale Coefficient to convert length in cm to weight in kg for females.
WtLen exp Exponent to convert length to weight for females.
Mat-50% Maturity logistic inflection (in cm or years) where female maturity-at-length (or age) is a logistic function: \(M_{l} = 1/(1+exp(\alpha*(l_{a} - \beta)))\). The \(\alpha\) is the slope, \(l_{a}\) is the size-at-age, and \(\beta\) is the inflection of the maturity curve. Value ignored for maturity option 3, 4, and 6.
Mat-slope Logistic slope (must have negative value). Value ignored for maturity option 3, 4, and 6.
Eggs-alpha Two fecundity parameters; usage depends on the selected fecundity option. Must be included here even if vector is read in the control section above.
Eggs-beta
COND: growth pattern > 1 Repeat female parameters in the above order for growth pattern 2.
Males Male natural mortality and growth parameters in the following order by growth pattern.
\(M\) Natural mortality for male GP1, where the number of natural mortality parameters depends on the option selected.
COND if \(M\) option = 1
N breakpoints N-1 parameter lines as an exponential offsets from the previous reference age.
Lmin Length at Amin (units in cm) for male, growth pattern 1. In a two-sex model, fixing the INIT value at 0 will assume the same Lmin as the female parameter value.
Lmax Length at Amax (units in cm) for male, growth pattern 1. In a two-sex model, fixing the INIT value at 0 will assume the same Lmax as the female parameter value.
VBK von Bertalanffy growth coefficient (units are per year) for males, growth pattern 1. In a two-sex model, fixing the INIT value at 0 will assume the same k as the female parameter value.
COND if growth type = 2
Richards Coefficient Only include this parameter if the Richards growth function is used. If included, a parameter value of 1.0 will have a null effect and produce a growth curve identical to von Bertalanffy.
COND if growth type >= 3 Age-Specific K
N parameter lines equal to the number of K deviations for the ages specified above.
cv young Variability for size at age \(<=\) Amin for males, GP1. Note that cv cannot vary over time, so do not set up env-link or a deviation vector. Also, units are either as cv or as standard deviation, depending on assigned value of cv pattern.
cv old Variability for size at age \(>=\) Amax for males, growth pattern 1. For intermediate ages, a linear interpolation of cv on mean size-at-age is used. Note that the units for cv will depend on the cv pattern and on the mortality-growth parameter offset method.
WtLen scale Coefficient to convert length in cm to weight in kg for males.
WtLen exp Exponent to convert length to weight for males.
COND: growth pattern > 1 Repeat male parameters in the above order for growth pattern 2.

COND: Hermaphroditism

3 parameter lines define a cumulative normal distribution for the transition rate from females to males (or vice versa). For more detail on these parameters, see the description of the Hermaphroditism controls above.
Inflect Age Hermaphrodite inflection age where the transition rate is halfway from 0 to its asymptote.
sd Hermaphrodite standard deviation of the cumulative normal.
Asmp Rate Hermaphrodite asymptotic transition rate.
COND: Recruitment Distribution 3 parameter lines defining the recruitment distribution. See Recruitment Distribution and Parameters for more details about the recruitment apportionment parameterization.
Method = 2
Recruitment Dist. GP Recruitment apportionment by growth pattern, if multiple growth patterns, multiple entries required.
Recruitment Dist. Area Recruitment apportionment by area, if multiple areas, multiple entries required.
Recruitment Dist. Month Recruitment apportionment by month, if multiple months, multiple entries required.
COND: Recruitment Distribution 1 parameter line for each settlement event defining the distribution of recruitment among them. See Recruitment Distribution and Parameters for more details about recruitment apportionment parameterization.
Method = 3
Recruitment Dist. 1 Recruitment apportionment parameter for the 1st settlement event.
Recruitment Dist. 2 Recruitment apportionment parameter for the 2nd settlement event.
Cohort growth deviation Set equal to 1.0 and do not estimate; it is deviations from this base that matter.
2 \(\times\) N selected movement pairs Movement parameters
COND: The following lines are only required when the associated features are turned on.
Platoon StDev Ratio Required if the platoon within/between standard deviation ratio was entered as a negative value.
Ageing Error Turned on in the data file.
Catch Multiplier For each fleet selected for this option in the data file.
Fraction female Fraction of female fish at the time of recruitment, by growth pattern; if multiple growth patterns, multiple entries are required.
COND: The following lines are only required when predator fleets are invoked.
\(M2\) Predator Turned on in the data file.

Example format for mortality-growth parameter section with 2 sexes, 2 areas. Parameters marked with COND are conditional on selecting that feature:

LO HI INIT Prior Value <other entries> Block Fxn Parameter Label
0 0.50 0.15 0.1 ... 0 #NatM_p_1_Fem_GP_1
0 45 21 36 ... 0 #L_at_Amin_Fem_GP_1
40 90 70 70 ... 0 #L_at_Amax_Fem_GP_1
0 0.25 0.15 0.10 ... 0 #VonBert_K_Fem_GP_1
0.10 0.25 0.15 0.20 ... 0 #CV_young_Fem_GP_1
0.10 0.25 0.15 0.20 ... 0 #CV_old_Fem_GP_1
-3 3 2e-6 0 ... 0 #Wtlen_1_Fem
-3 4 3 3 ... 0 #Wtlen_2_Fem
50 60 55 55 ... 0 #Mat50%_Fem
-3 3 -0.2 -0.2 ... 0 #Mat_slope_Fem
-5 5 0 0 ... 0 #Eggs/kg_inter_Fem
-50 5 0 0 ... 0 #Eggs/kg_slope_wt_Fem
0 0.50 0.15 0.1 ... 0 #NatM_p_1_Mal_GP_1
0 45 21 36 ... 0 #L_at_Amin_Mal_GP_1
40 90 70 70 ... 0 #L_at_Amax_Mal_GP_1
0 0.25 0.15 0.10 ... 0 #VonBert_K_Mal_GP_1
0.10 0.25 0.15 0.20 ... 0 #CV_young_Mal_GP_1
0.10 0.25 0.15 0.20 ... 0 #CV_old_Mal_GP_1
-3 3 2e-6 0 ... 0 #Wtlen_1_Mal
-3 4 3 3 ... 0 #Wtlen_2_Mal
0 0 0 0 ... 0 #RecrDist_GP_1
0 0 0 0 ... 0 #RecrDist_Area_1
0 0 0 0 ... 0 #RecrDist_Area_2
0 0 0 0 ... 0 #RecrDist_Settlement_1
0.2 5 1 1 ... 0 #Cohort_Grow_Dev
-5 5 -4 1 ... 0 #Move_A_seas1_GP1_from_1to2 (COND)
-5 5 -4 1 ... 0 #Move_B_seas1_GP1_from_1to2 (COND)
-99 99 1 0 ... 0 #AgeKeyParm1 (COND)
-99 99 0.288 0 ... 0 #Age_Key_Parms 2 to 5 (COND)
-99 99 0.715 0 ... 0 #Age_Key_Parm6 (COND)
0.2 3.0 1.0 0 ... 0 #Catch_mult_fleet1 (COND)
0.001 0.999 0.5 0.5 ... 0 #Frac_Female_GP_1
-1.0 2 0 0 ... 0 #PredM2_4

8.5.10.1 Setting Male Parameters Equal to Females


The model allows a short-cut for males to use the same parameter values as female fish for natural mortality, minimum length (Length at Amin), maximum length (Length at Amax), the coefficient of variation at younger ages (CV1), the coefficient of variation at older ages (CV2), and the growth coefficient (K) when using offset option = 1 (the offset section has information on the options available). If the INIT parameter value for males is set equal to 0.0 and the phase is set to negative (not estimated), each of these male parameters will use the corresponding female parameter value.

8.5.11 Time-varying Parameters

Please see the Time-Varying Parameter Specification and Setup section for details on how to set up time-varying parameters. In short, additional short parameter lines will be needed after the long parameter lines. There are some additional considerations for time-varying growth.

8.5.12 Seasonal Biology Parameters

Seasonal effects are available for weight-length parameters, maturity, fecundity, and for the growth parameter K. The seasonal parameter values adjust the base parameter value for that season. \[P'=P*exp(\text{seas\_value})\]

Control file continued:
Value Description
Seasonality for selected biology parameters (not a conditional input). Read 10 integers to specify which biology parameters have seasonality: female-wtlen1, female-wtlen2, maturity1, maturity2, fecundity1, fecundity2, male-wtlen1, male-wtlen2, L1, K. Reading a positive value selects that factor for seasonality.
COND: If any factors have seasonality, then read N seasons parameters for each selected factor.
<short parameter line(s)> Read N seasons short parameter lines for each factor selected for seasonality. The parameter values define an exponential offset from the base parameter value.

8.6 Spawner-Recruitment

The spawner-recruitment section starts by specification of the functional relationship that will be used.

Control file continued:
Value Label Description
3 Spawner-Recruitment Relationship The options are:
2 = Ricker, 2 parameters: \(ln(R_{0})\) and steepness (\(h\));
3 = standard Beverton-Holt, 2 parameters: \(ln(R_{0})\) and steepness;
4 = ignore steepness and no bias adjustment. Use this in conjunction with very low emphasis on recruitment deviations to get CAGEAN-like unconstrained recruitment estimates; 2 parameters, but only the first is used;
5 = hockey stick, 3 parameters: \(ln(R_{0})\), steepness (the fraction of virgin SSB at which the inflection occurs), and \(R_{\text{min}}\) (the recruitment level at SSB = 0.0);
6 = Beverton-Holt with flat-top beyond \(B_{0}\), 2 parameters: \(ln(R_{0})\) and steepness;
7 = survivorship function, 3 parameters: \(ln(R_{0})\), \(z_{frac}\), and \(\beta\); suitable for sharks and other low-fecundity stocks to assure recruits are \(<=\) population production;
8 = Shepherd re-parameterization, 3 parameters: \(ln(R_{0})\), steepness, and shape parameter \(c\) (added in v.3.30.11; in beta mode); and
9 = Ricker re-parameterization, 3 parameters: \(ln(R_{0})\), steepness, and Ricker power \(\gamma\) (added in v.3.30.11; in beta mode).
1 Equilibrium recruitment Use steepness in initial equilibrium recruitment calculation
0 = none; and
1 = use steepness (h).
0 Future Feature Reserved for the future option to make realized \(\sigma_R\) a function of the stock-recruitment curve.

8.6.0.1 Equilibrium Recruitment


In principle, steepness should always be used when calculating equilibrium recruitment. This was not the default in early versions of Stock Synthesis, so it has not come into common practice. The original logic, from the early 1990s versions of Stock Synthesis for long-lived U.S. west coast groundfish, was that fishing had not yet gone on long enough to have reduced spawning biomass enough to noticeably reduce expected recruitment for the chosen initial equilibrium year.

Steepness should be used in the equilibrium calculation whenever you believe that the initial equilibrium catch was large enough to have reduced expected recruitment below \(R_{0}\). Note that when using this option, the initial equilibrium catch cannot be greater than msy. SS3 uses the identical code for the initial equilibrium catch and for the msy calculation, so it is axiomatic that msy will be \(>=\) the initial equilibrium catch; SS3 will estimate an \(R_{0}\) large enough to make msy \(>=\) the initial equilibrium catch. Alternatively, you can elect to not use this option and instead add many years to the beginning of the time series with that same level of initial catch. In this case, it is not an equilibrium, so the catch can reduce the population below \(B_{MSY}\).

8.6.1 Spawner-Recruitment Functions

The number of age-0 fish is related to spawning biomass according to a stock-recruitment relationship. There are a number of options for the shape of the spawner-recruitment relationship: Beverton-Holt, Ricker, Hockey-Stick, and a survival-based stock recruitment relationship.

8.6.1.1 Beverton-Holt


The Beverton-Holt Stock Recruitment curve is calculated as: \[{R_y = \frac{4hR_0SB_y}{SB_0(1-h)+SB_y(5h-1)}e^{-0.5b_y\sigma^2_R+\tilde{R}_y}\qquad \tilde{R}_y\sim N(0;\sigma^2_R)}\]

where \(R_0\) is the unfished equilibrium recruitment, \(SB_0\) is the unfished equilibrium spawning biomass (corresponding to \(R_0\)), \(SB_y\) is the spawning biomass at the start of the spawning season during year \(y\), \(h\) is the steepness parameter, \(b_y\) is the bias adjustment fraction applied during year \(y\), \(\sigma_R\) is the standard deviation among recruitment deviations in natural log space, and \(\tilde{R}_y\) is the log-normal recruitment deviation for year \(y\). The bias-adjustment factor (Methot and Taylor 2011) ensures unbiased estimation of mean recruitment even during data-poor eras in which the maximum likelihood estimate of the recruitment deviation is near 0.0.
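A direct R transcription of this curve (deterministic part plus the bias-adjusted deviation), useful for checking that recruitment equals \(R_0\) when \(SB_y = SB_0\) and the deviation is zero; all input values below are hypothetical:

```r
# Beverton-Holt recruitment as written above.
bh_recruits <- function(SBy, SB0, R0, h, rdev = 0, b_y = 1, sigmaR = 0.6) {
  det <- (4 * h * R0 * SBy) / (SB0 * (1 - h) + SBy * (5 * h - 1))
  det * exp(-0.5 * b_y * sigmaR^2 + rdev)
}

bh_recruits(SBy = 1000, SB0 = 1000, R0 = 500, h = 0.7,
            rdev = 0, sigmaR = 0)   # returns 500 = R0
```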

8.6.1.2 Ricker


The Ricker Stock Recruitment curve is calculated as: \[{R_y = \frac{R_0SB_y}{SB_0}e^{h(1-SB_y/SB_0)}e^{-0.5b_y\sigma^2_R+\tilde{R}_y}\qquad \tilde{R}_y\sim N(0;\sigma^2_R)}\]

where the stock recruitment parameters have the same meaning as described above for the Beverton-Holt.

8.6.1.3 Hockey-Stick


The hockey-stick recruitment curve is calculated as: \[{R_y = \Big(join\big(R_{\text{min}}R_0+R_0\frac{SB_y}{hSB_0}(1-R_{\text{min}})\big)+R_0(1-join)\Big)e^{-0.5b_y\sigma^2_R+\tilde{R}_y}\qquad \tilde{R}_y\sim N(0;\sigma^2_R)}\] where \(R_{\text{min}}\) is the minimum recruitment level predicted at a spawning size of zero and is set by the user in the control file, \(h\) is defined as the fraction of \(SB_0\) below which recruitment declines linearly, and \(join\) is defined as: \[{ join = \bigg[1+e^{1000(SB_y-hSB_0)/SB_0}\bigg]^{-1} }\]
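A sketch of the deterministic part in R, showing how the \(join\) function produces a smooth, differentiable approximation to the kink at \(hSB_0\) (input values hypothetical):

```r
# Hockey-stick recruitment (deterministic part; deviations omitted).
hs_recruits <- function(SBy, SB0, R0, h, Rmin) {
  join <- 1 / (1 + exp(1000 * (SBy - h * SB0) / SB0))
  join * (Rmin * R0 + R0 * (SBy / (h * SB0)) * (1 - Rmin)) +
    R0 * (1 - join)
}

# Linear decline below h * SB0 = 200; flat at R0 above it
hs_recruits(SBy = c(100, 200, 800), SB0 = 1000, R0 = 500,
            h = 0.2, Rmin = 0)   # 250 500 500
```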

8.6.1.4 Survivorship


The survivorship stock recruitment relationship based on Taylor et al. (2013) is a stock-recruitment model that enables explicit modeling of survival between embryos and age 0 recruits, and allows the description of a wide range of pre-recruit survival curves. The model is especially useful for low fecundity species that produce relatively few offspring per litter and exhibit a more direct connection between spawning output and recruitment than species generating millions of eggs.

Survival-based recruitment is constrained so that the recruitment rate cannot exceed fecundity. The relationship between survival and spawning output is based on parameters which are on a natural log scale. These are: \[z_0=-ln(S_0)\] which is the negative of the natural log of the equilibrium survival \(S_0\), and can be thought of as a pre-recruit instantaneous mortality rate at equilibrium, and \[z_{\text{min}}=-ln(S_{\text{max}})=z_0(1-z_{\text{frac}})\] which is the negative of the natural log of the maximum pre-recruit survival rate (\(S_{\text{max}}\), the limit as spawning output approaches 0), and is parameterized as a function of \(z_{\text{frac}}\) (which represents the reduction in mortality as a fraction of \(z_0\)) so the expression is well-defined over the parameter range 0 < \(z_{\text{frac}}\) < 1.

Recruitment at age 0 for each year in the time series is calculated as: \[{ R_y = SB_ye^{\Big(-z_0 + (z_0-z_{min})\big(1-(SB_y/SB_0)^\beta \big)\Big)}e^{\tilde{R}_y}\qquad \tilde{R}_y\sim N(0;\sigma^2_R)}\] where \(SB_y\) is the spawning output in year \(y\), \(\beta\) is a parameter controlling the shape of the density-dependent relationship between relative spawning depletion \(SB_y/SB_0\) and pre-recruit survival (with limit \(\beta > 0\)), \(\tilde{R}_y\) is the recruitment deviation for year \(y\), and \(\sigma_R\) is the standard deviation of recruitment in natural log space.

As implemented in Stock Synthesis, the parameters needed to apply the stock-recruitment relationship based on pre-recruit survival are ln(\(R_0\)), \(z_{\text{frac}}\), and \(\beta\). The value of ln(\(R_0\)) defines the equilibrium quantities of \(SB_0\), \(S_0\), and \(z_0\) for a given set of biological inputs (either estimated or fixed), regardless of the values of \(z_{\text{frac}}\) and \(\beta\).

The interpretation of the quantity \(z_0=-ln(S_0)\) as a pre-recruit instantaneous mortality rate at unfished equilibrium is imperfect because the recruitment in a given year is calculated as the product of the survival fraction \(S_y\) and the spawning output \(SB_y\) for that same time period, so there is not a 1-year lag between quantification of eggs or pups and recruitment at age 0, which is when recruits are calculated within Stock Synthesis. However, if age 0 or some set of the youngest ages is not selected by any fishery or survey, then density-dependent survival may be assumed to occur anywhere before the first appearance of any cohort in the data or model expectations. In such cases, the upper limit on survival up to age \(a\) is given by \(S_{\text{max}}e^{-aM}\).

Nevertheless, interpreting \(z_0\) as an instantaneous mortality helps with the understanding of \(z_{\text{frac}}\). This parameter controls the magnitude of the density-dependent increase in survival associated with a reduction in spawning output. It represents the fraction by which this mortality-like rate is reduced as spawning output is reduced from \(SB_0\) to 0. This is approximately equal to the increase in survival as a fraction of the maximum possible increase in survival. That is: \[z_{\text{frac}}=\frac{ln(S_{\text{max}})-ln(S_0)}{-ln(S_0)} \approx \frac{S_{\text{max}}-S_0}{1-S_0}\] For example, if \(S_0 = 0.4\), \(z_{\text{frac}}=0.8\), then the resulting fraction increase in survival is \((S_{\text{max}}-S_0)/(1-S_0)=0.72\).

The parameter \(\beta\) controls the point where survival changes fastest as a function of spawning depletion. A value of \(\beta\) = 1 corresponds to a linear change in natural log survival and an approximately linear relationship between survival and spawning depletion. Values of \(\beta\)<1 have survival increasing fastest at low spawning output (concave decreasing survival) whereas \(\beta\)>1 has the increase in survival occurring the fastest closer to the unfished equilibrium (convex decreasing survival).

The steepness (\(h\)) of the spawner-recruit curve (defined as recruitment relative to \(R_{0}\) at a spawning depletion level of 0.2) based on pre-recruit survival can be derived from the parameters discussed above according to the relationship and associated inequality: \[h = 0.2e^{z_0z_{\text{frac}}(1-0.2^\beta)}<0.2e^{z_0}=\frac{1}{5S_0}=\frac{SB_0}{5R_0}\]

Unlike the Beverton-Holt stock-recruitment relationship, recruitment can increase above \(R_0\) for stocks that are below \(SB_0\) and thus the steepness is not fundamentally constrained below 1. However, in many cases, steepness will be limited well below 1 by the inequality above, which implies an inverse relationship between the maximum steepness and equilibrium survival. Specifically, the inequality above bounds steepness below 1 for all cases where \(S_0\) > 0.2, which are those with the lowest fecundity, an intuitively reasonable result. For example, with \(S_0\) = 0.4, the steepness is limited below 0.5, regardless of the choice of \(z_{\text{frac}}\) or \(\beta\). This natural limit on steepness may be one of the most valuable aspects of this stock-recruitment relationship.
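The worked numbers above can be verified with a short R sketch (illustrative only):

# Survival-based quantities for S_0 = 0.4 and z_frac = 0.8.
S_0    <- 0.4
z_frac <- 0.8
z_0    <- -log(S_0)                        # ~0.92
z_min  <- z_0 * (1 - z_frac)               # ~0.18
S_max  <- exp(-z_min)                      # ~0.83
(S_max - S_0) / (1 - S_0)                  # ~0.72, fractional increase in survival
beta <- 1
0.2 * exp(z_0 * z_frac * (1 - 0.2^beta))   # steepness h ~ 0.36
0.2 * exp(z_0)                             # upper bound 1/(5*S_0) = 0.5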

Code for the survival based recruitment can be found in SS_recruit.tpl, search for “SS_Label_43.3.7 survival based”.

8.6.1.5 Shepherd


The Shepherd stock recruit curve is calculated as: \[R_y = \bigg(\frac{SB_y}{SB_0}\bigg)\frac{5h_{adj}R_0(1-0.2^c)}{(1-5h_{adj}0.2^c)+(5h_{adj}-1)(\frac{SB_y}{SB_0})^c}e^{-0.5b_y\sigma^2_R+\tilde{R}_y}\qquad \tilde{R}_y\sim N(0;\sigma^2_R)\] where \(c\) is the shape parameter for the stock recruitment curve, and \(h_{adj}\) is the transformed steepness parameter defined as: \[h_{adj}=0.2+\bigg(\frac{h-0.2}{0.8}\bigg)\bigg(\frac{1}{5*0.2^c}-0.2\bigg)\]

More details can be found in Punt and Cope (2019).

8.6.1.6 Ricker Re-parameterization


This is a re-parameterized version of the Ricker stock-recruitment curve. More details can be found in Punt and Cope (2019). \[R_y = R_0*(1-temp)*e^{ln(5h)temp^{\gamma}/0.8^{\gamma}}\] where \(\gamma\) is the Ricker shape parameter and \(temp\) is defined as: \[temp = \begin{cases} 1-SB_y/SB_0 & \text{if $1-SB_y/SB_0 >$ 0 }\\ 0.001 & \text{if $1-SB_y/SB_0 \leq$ 0} \end{cases}\] so that recruitment stabilizes at approximately \(R_0\) when \(SB_y > SB_0\).

8.6.2 Spawner-Recruitment Parameter Setup

Read the required number of long parameter setup lines (e.g., LO, HI, INIT, PRIOR, PRIOR TYPE, SD, PHASE, ..., and BLOCK TYPE). These parameters are:

Value Label Description
8.5 \(ln(R_{0})\) Natural log of virgin recruitment level.
0.60 Steepness Steepness of spawner recruitment relationship, bound by 0.2 and 1.0 for the Beverton-Holt.
COND: srr = 5, 7, or 8
3rd Parameter Optional depending on which spawner-recruitment relationship function is used.
0.60 \(\sigma_R\) Standard deviation of natural log recruitment. This parameter has two related roles. It penalizes deviations from the spawner-recruitment curve, and it defines the offset between the arithmetic mean spawner-recruitment curve (as calculated from \(ln(R_{0})\) and steepness) and the expected geometric mean (which is the basis from which the deviations are calculated). Thus, the value of \(\sigma_R\) must be selected to approximate the true average recruitment deviation. See Tuning \(\sigma_R\) section below for additional guidance on how to tune \(\sigma_R\).
0 Regime Parameter This replaces the R1 offset parameter. It can have a block for the initial equilibrium year, so can fully replicate the functionality of the previous R1 offset approach. The SR regime parameter is intended to have a base value of 0.0 and not be estimated. Similar to cohort-growth deviation, it serves simply as a base for adding time-varying adjustments. This concept is similar to the old environment effect on deviates feature in v.3.24 and earlier.
0 Autocorrelation Autocorrelation in recruitment.

Example setup of the spawner-recruitment section:

LO HI INIT PRIOR <other entries> Block Fxn Parameter Label
3 31 8.81 10.3 ... 0 #SR_LN(R0)
0.2 1 0.61 0.70 ... 0 #SR_BH_steep
0 2 0.60 0.80 ... 0 #SR_sigmaR
-5 5 0 0 ... 0 #SR_regime
-99 99 0 0 ... 0 #SR_autocorr

8.6.3 Spawner-Recruitment Time-Varying Parameters

The \(R_{0}\), steepness, and regime shift parameters can be time-varying by blocks, trends, environmental linkages, or random deviations. Details on how to specify time-varying parameters can be found in the Time-Varying Parameter Specification and Setup section. However, not all of these options make sense for all parameters; please see additional details in the section on Time-Varying Stock-Recruitment Considerations.

8.6.4 Tuning \(\sigma_R\)

The \(\sigma_R\) value is typically not estimable, and it is recommended practice to tune the input \(\sigma_R\) value based on the variance of the estimated recruitments after running SS3. The R package r4ss (R code for Stock Synthesis), designed to read and visualize SS3 model results, provides recommendations on adjusting \(\sigma_R\) in the sigma_R_info object within the list created by the r4ss::SS_output() function. An alternative \(\sigma_R\) value is provided based on the equation:

\[\sigma_R^2 = Var(\hat{r}) + \overline{SE(\hat{r}_y)}^2\]

Simulation studies conducted by Methot and Taylor (2011) compared three methods of tuning \(\sigma_R\); the approach above generally performed best because it accounts for both the variability among recruitment deviations and the uncertainty in their estimates.
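In practice, the tuning information can be pulled from a completed model run with r4ss (the directory path below is a placeholder):

library(r4ss)
# Read SS3 output (Report.sso and related files) from a model run.
replist <- SS_output(dir = "path/to/my_model")
# Recommendations for adjusting sigma_R, including the alternative value
# based on the equation above.
replist$sigma_R_info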

8.6.5 Recruitment Deviation Setup

Control file continued:
Value Label Description
1 Do Recruitment Deviations This selects the way in which recruitment deviations are coded:
0: None (so all recruitments come from spawner recruitment curve).
1: Deviation vector (previously the only option): the deviations during the main period are encoded as a deviation vector that enforces them to sum to zero for this period.
2: Simple deviations: the deviations do not have an explicit constraint to sum to zero, although they should still end up close to a zero sum. The difference in model performance between options (1) and (2) has not been fully explored to date. This is the recommended option if doing mcmc (see issue #107 in the admb GitHub repository for more information).
3: Deviation vector (added in v.3.30.13) where the estimated recruitment is equal to \(R_{0}\), adjusted for blocks, multiplied by a simple vector of unconstrained deviations. The negative log likelihood from the deviation vector is based on the natural log of the estimated recruitment divided by the expected recruitment by year, adjusted for the spawner-recruit curve, regimes, environmental parameters, and bias adjustment. The negative log likelihoods under options 2 and 3 are approximately equal.
4: Similar to option 3 but includes a penalty based on the sum of the deviations (added in v.3.30.13).
1971 Main recruitment deviations begin year If the begin year is less than the model start year, then the early deviations are used to modify the initial age composition. However, if set to be more than the population maximum age before the start year, it is changed to equal the maximum age before the start year.
2017 Main recruitment deviations end year
The final year to estimate main recruitment deviations should be set to a year when information about young fish in the data becomes limited. For example, if the model end year is 2020 and the fleet/survey only starts observing fish of age 2+, the last year to estimate main recruitment deviations could be set to 2018. Years after the main period but before the model end year will be estimated as late deviations. If the recruitment deviations end year is later than the retro year, it is reset to equal the retro year.
3 Main recruitment deviations phase.
1 Advanced Options
0: Use default values for advanced options.
1: Read values for the 11 advanced options.
COND = 1 Beginning of advanced options
1950 Early Recruitment Deviations Start Year:
0: skip (default);
+ year: absolute year (must be less than begin year of main recruitment deviations); and
-integer: set relative to main recruitment deviations start year.
Note: because this is a deviation vector, it should be long enough so that recruitment deviations for individual years are not unduly constrained.
6 Early Recruitment Deviations Phase:
Negative value: default value to not estimate early deviations.
Users may want to set to a late phase if there is not much early data.
0 Forecast Recruitment Phase:
0 = Default value.
Forecast recruitment deviations always begin in the first year after the end of the main recruitment deviations. Recruitment in the forecast period is deterministic, derived from the specified stock-recruitment relationship. Setting this phase to 0 causes it to be set to the maximum lambda phase + 1 (so that the forecast recruitment deviations become active after the rest of the parameters have converged). However, it is possible here to set an earlier phase for their estimation, or to set a negative phase to keep the forecast recruitment deviations at a constant level.
1 Forecast Recruitment Deviations Lambda:
1 = Default value.
This lambda is for the log likelihood of the forecast recruitment deviations that occur before endyr + 1. Use a larger value here if solitary, noisy data at end of time series cause unruly recruitment deviation estimation.
1956 Last year with no bias adjustment.
1970 First year with full bias adjustment.
2001 Last year with full bias adjustment.
2002 First recent year with no bias adjustment.
These four entries control how the bias adjustment is phased in and then phased back out when the model is searching for the maximum log likelihood. Bias adjustment is automatically turned off when in mcmc mode. For intervening years between the first and second years in this list, the amount of bias adjustment that will be applied is linearly phased in. The first year with full bias adjustment should be a few years into the data-rich period so that the model will apply the full bias-correction only to those recruitment deviations that have enough data to inform the model about the full range of recruitment variability. Defaults for the four-year values: start year - 1000, start year - Nages, main recruitment deviation final year, end year + 1. See Recruitment Likelihood with Bias Adjustment for more information.
0.85 Max Bias Adjustment:
> 0: value for the maximum bias adjustment during the mle mode.
-1: bias adjustment set to 1.0 for all years, including forecast, with estimated recruitment deviations (similar to mcmc).
-2: bias adjustment set to 1.0 for all years from start to end model year, and set to 0 for the forecast period.
-3: bias adjustment set to 0 for all model and forecast years.
0 Period For Recruitment Cycles:
Use this when the model is configured to treat seasons as years and there is a need to impose a periodicity on the expected recruitment level. If the value is > 0, then read that number of full parameter lines below to define the recruitment cycle. See more information on setting up a seasons-as-years model in the Continuous Seasonal Recruitment section.
-5 Minimum Recruitment Deviation: Min value for recruitment deviation.
Negative phase = Default value.
5 Maximum Recruitment Deviation: Max value for recruitment deviation.
Late Phase = Default value (e.g., 5)
5 Number of Explicit Recruitment Deviations to Read:
0: Default, do not read any recruitment deviations; Integer: read this number of recruitment deviations.
COND = If N recruitment cycle is > 0, enter N full parameter lines below.
<parameter line> Full parameter line for each of the N periods of recruitment cycle.
COND = If the number of recruitment deviations to read is > 0, then enter a total
number of lines equal to that specified above. A year and a deviation value are expected as:
2010 1.27 Enter year and deviation.
2011 -0.45 Two example recruitment deviations being read. Note: the model will rescale the entire vector of recruitment deviation after reading these deviations, so by reading two positive values, all other recruitment deviations will be scaled to a small negative value to achieve a sum to zero condition before starting model estimation.
2012 -0.25
2013 0.67
2014 0.20
2015 -0.11
End of advanced options

8.6.5.1 Recruitment Eras


Conceptually, the model treats the early, data-poor period, the main data-rich period, and the recent/forecast time period as three eras along a continuum. The user has control of the break year between eras. Each era has its own vector. The early era is defined as a vector (prior to v.3.10 this was a deviation vector) so it can have zeros during the earliest years not informed by data and then a few years with non-zero values, without imposing a zero-centering on this collection of deviations. The main era can be a vector of simple deviations or a deviation vector, but it is normally implemented as a deviation vector so that the spawner-recruitment function is its central tendency. The last era does not force a zero-centered deviation vector, so it can have zeros during the actual forecast and non-zero values in the last few years of the time series. The early and last eras are optional, but their use can help prevent balancing a preponderance of negative deviations in early years against a preponderance of positive deviations in later years. When the three eras are used, it would be typical to turn on the main era during an early model phase, turn on the early era during a later phase, then have the last era turn on in the final phase.

8.6.5.2 Recruitment Likelihood with Bias Adjustment


For each year in the total recruitment deviation time series (early, mid, late/forecast) the contribution of that year to the log likelihood is equal to: \(dev^2/(2.0*\sigma^2_R)+offset*ln(\sigma_R)\) where offset is the recruitment bias adjustment between the arithmetic and geometric mean of expected recruitment for that year. With this approach, years with a zero or small offset value do not contribute to the second component. \(\sigma_R\) may be estimable when there is good data to establish the time series of recruitment deviations.

The implemented recruitment bias adjustment is based upon the work documented in Methot and Taylor (2011), following the work of Maunder and Deriso (2003). The concept is based upon the following logic. The \(\sigma_R\) represents the true variability of recruitment in the population. It provides the constraining penalty for the estimates of recruitment deviations, and it is not affected by data. Where data that are informative about recruitment deviations are available, the total variability in recruitment, \(\sigma_R\), is partitioned into a signal (the variability among the recruitment estimates) and a residual (the variance of each recruitment estimate), calculated as:

\[SE(\hat{r}_y)^2 + SD(\hat{r})^2=\Bigg( \bigg( \frac{1}{\sigma^2_d}+\frac{1}{\sigma^2_R}\bigg)^{-1/2}\Bigg)^2+\Bigg( \frac{\sigma^2_R}{(\sigma^2_R+\sigma^2_d)^{1/2}}\Bigg)^2=\sigma^2_R\]

Where there are no data, no signal can be estimated and the individual recruitment deviations collapse towards 0.0 and the variance of each recruitment deviation approaches \(\sigma_R\). Conversely, where there are highly informative data about the recruitment deviations, the variability among the estimated recruitment deviations will approach \(\sigma_R\) and the variance of each recruitment deviation will approach zero. Perfect data will estimate the recruitment time series signal perfectly. Of course, we never have perfect data, so we should always expect the estimated signal (variability among the recruitment deviations) to be less than the true population recruitment variability.

The correct offset (bias adjustment) to apply to the expected value for recruitment is based on the concept that a time series of estimated recruitments should be mean unbiased, not median unbiased, because the biomass of a stock depends upon the cumulative number of recruits, which is dominated by the large recruitments. The degree of offset depends upon the degree of recruitment signal that can be estimated. Where no recruitment signal can be estimated, the median recruitment is the same as the mean recruitment, so no offset is applied. Where log-normal recruitment signal can be estimated, the mean recruitment will be greater than the median recruitment. The value:

\[b_y=\frac{E\Big( SD(\hat{r}_y)\Big)^2}{\sigma^2_R}=1-\frac{SE(\hat{r}_y)^2}{\sigma^2_R}\]

of the offset then depends upon the partitioning of \(\sigma_R\) into between- and within-recruitment variability. The most appropriate degree of bias adjustment can be approximated from the relationship among \(\sigma_R\), recruitment variability (the signal), and recruitment residual error. Because the quantity and quality of data varies during a time series, the user can control the rate at which the offset is ramped in during the early, data-poor years and then ramped back to zero for the forecast years. On output, the mean bias adjustment during the early and main eras is calculated and compared to the rmse of the estimated recruitment deviations in the Report.sso file. A warning is generated if the rmse is small and the bias adjustment is larger than 2.0 times the ratio of \(RMSE^2\) to \(\sigma^2_R\). Additional information on recruitment bias adjustment can be found in the Recruitment Variability and Bias Correction section.
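The ramp implied by the four control years can be sketched as follows (an illustrative reconstruction in R of the linear phase-in and phase-out described above, not the SS3 source code):

# Bias-adjustment fraction b_y by year, given the four control years and
# the maximum bias adjustment.
bias_adj <- function(yr, y_nobias1, y_full1, y_full2, y_nobias2, b_max) {
  b <- numeric(length(yr))
  b[yr >= y_full1 & yr <= y_full2] <- b_max              # full adjustment
  up <- yr > y_nobias1 & yr < y_full1                    # linear phase-in
  b[up] <- b_max * (yr[up] - y_nobias1) / (y_full1 - y_nobias1)
  down <- yr > y_full2 & yr < y_nobias2                  # linear phase-out
  b[down] <- b_max * (y_nobias2 - yr[down]) / (y_nobias2 - y_full2)
  b
}
bias_adj(1950:2005, 1956, 1970, 2001, 2002, 0.85)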

In mcmc mode, the model still draws recruitment deviations from the log-normal distribution, so the full offset is used such that the expected mean recruitment from this log-normal distribution will stay equal to the mean from the spawner-recruitment curve. When the model reaches the mcmc and MCEVAL phases, all bias adjustment values are set to 1.0 for all active recruitment deviations because the model is now re-sampling from the full log-normal distribution of each recruitment.

8.6.5.3 Recruitment Cycle


When the model is configured such that seasons are modeled as years, the concept of season within year disappears. However, there may be reason to still want to model a repeating pattern in expected recruitment to track an actual seasonal cycle in recruitment. If the recruitment cycle factor is set to a positive integer, this value is interpreted as the number of time units in the cycle and this number of full parameter lines will be read. The cyclic effect is modeled as an \(exp(p)\) factor times \(R_{0}\), so a parameter value of 0.0 has nil effect. In order to maintain the same number of total recruits over the duration of the cycle, a penalty is introduced so that the cumulative effect of the cycle produces the same number of recruits as \(\text{Ncycles}*R_{0}\). Because the cyclic factor operates as an exponential, this penalty is different from a penalty that would cause the sum of the cyclic factors to be 0.0. This is done by adding a penalty to the parameter likelihood.

8.6.5.4 Initial Age Composition


A non-equilibrium initial age composition is achieved by setting the first year of the recruitment deviations before the model start year. These pre-start year recruitment deviations will be applied to the initial equilibrium age composition to adjust this composition before starting the time series. The model first applies the initial \(F\) level to an equilibrium age composition to get a preliminary N-at-age vector and the catch that comes from applying the \(F\)s to that vector, then it applies the recruitment deviations for the specified number of younger ages in this vector. If the number of estimated ages in the initial age composition is less than maximum age, then the older ages will retain their equilibrium levels. Because the older ages in the initial age composition will have progressively less information from which to estimate their true deviation, the start of the bias adjustment should be set accordingly.

8.7 Fishing Mortality Method

The implementation and reporting of fishing mortality, \(F\), in SS3 is more complex than in simpler models and can be confusing. This description provides an overview of the ways in which \(F\) is calculated, used, and reported.

8.7.0.1 Nomenclature


The nomenclature below ignores sex, morphs, and areas for simplicity. The quantities associated with \(F\) calculations are defined as:

Terminology and reporting of \(\text{ann}F\) and \(F\text{\_std}\) has been slightly revised for clarity in v.3.30.15.00 and the description here follows the new conventions.

8.7.0.2 F_Method


There are two approaches for estimating fishing mortality (\(F\)) in Stock Synthesis. One provides a mid-season harvest rate using Pope’s approximation, the other provides a season-long \(F\) rate using the Baranov catch equation. Pope’s harvest rate is implemented as F_Method = 1. The subsequent options model \(F\) as a season-long rate using the Baranov catch equation.

The \(F_{hyb}\) approach works by:

  1. Applying the Pope’s method to get a harvest rate for each fleet in the current season;

  2. Converting that harvest rate to the equivalent \(F_{hyb}\). This is exact with one fleet and approximate with multiple fleets;

  3. Adjusting those \(F_{hyb}\) values over a fixed number of iterations (2-4) using the ratio of observed to calculated catch for each fleet; and

  4. Proceeding with those \(F_{hyb}\) values into subsequent model steps. \(F_{par}\) and \(F_{hyb}\) values are used in the same Baranov catch equation and a catch log likelihood is calculated based on the observed retained catch and the annual standard error of the catch.
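A single-fleet, single-season R sketch of steps 1-3 above (simplified age-structured biology; all names are illustrative assumptions, not SS3 internals):

# Hybrid F: start from Pope's harvest rate, convert to a season-long F,
# then tune F so that the Baranov catch matches the observed catch.
hybrid_F <- function(catch_obs, N, M, sel, w, n_iter = 4) {
  B_sel <- sum(N * sel * w)                 # selected biomass, season start
  U <- catch_obs / (B_sel * exp(-0.5 * M))  # step 1: Pope's mid-season rate
  U <- min(U, 0.95)                         # guard against U >= 1
  F_hyb <- -log(1 - U)                      # step 2: equivalent F
  for (i in seq_len(n_iter)) {              # step 3: tune to observed catch
    Z <- M + F_hyb * sel
    catch_hat <- sum((F_hyb * sel / Z) * N * w * (1 - exp(-Z)))
    F_hyb <- F_hyb * catch_obs / catch_hat
  }
  F_hyb
}
hybrid_F(catch_obs = 80, N = c(1000, 700, 450, 300), M = 0.25,
         sel = c(0.1, 0.6, 1, 1), w = c(0.2, 0.6, 1.1, 1.5))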

Some notes:

Control file continued:
Typical Value Description and Options
0.2 \(F\) ballpark
This value is compared to the sum of the \(F\)’s for the specified year (defined on the next line). The sum is over all seasons and areas. In older versions of SS3, the lambda was automatically set to 0.0 in the final phase; the user now has control over whether to reduce the lambda in later phases.
-1990 \(F\) ballpark year
A negative value disables the \(F\) ballpark.
3 F_Method
1 = Pope’s (discrete);
2 = Baranov (continuous) \(F\) as a parameter;
3 = Hybrid \(F\); and
4 = Fleet-specific parameter/hybrid \(F\) (recommended).
2.9 Maximum \(F\)
This maximum is applied within each season and area. A value of 0.9 is recommended for F_Method 1, and a value of about 4 is recommended for F_Methods 2 and 3.
COND: F_Method = 1, no additional input for Pope’s approximation.
COND: F_Method = 2:
0.10 1 0 Initial \(F\) value, phase, and the number of \(F\) detail setup lines to read (example has 0).
For phases prior to the phase of the \(F\) value becoming active, the hybrid option will be used and the \(F\) values so calculated become the starting values for the \(F\) parameters when this phase is reached.
If F_Method = 2 and N for \(F\) detail is > 0
1 1980 1 0.20 0.05 4 Fleet, year, season, \(F\), se, phase - these values override the catch se values in the data file and the overall starting \(F\) value and phase read just above.
COND: F_Method = 4
Read the list of fleets needing parameters, starting \(F\) values, and phases. To treat a fleet’s \(F\) as hybrid only, either select a phase of 99 or do not enter a parameter line for that fleet. A parameter line is not required for all fleets; if not specified, a fleet will be treated as hybrid across all phases, except for bycatch fleets, which are required to have an input parameter line. Use a negative phase to set \(F\) as constant (i.e., not estimated) in v.3.30.19 and higher.
Fleet Parameter Value Phase
1 0.05 2
2 0.01 1 # bycatch fleet
-9999 1 1
COND: F_Method = 3 or 4
4 Number of tuning iterations for hybrid fleets. A value of 3 is sufficient with a single fleet and low \(F\)s. A value of 5 or so may be needed to match the catch nearly exactly when there are many fleets and high \(F\)s.

8.7.1 Equilibrium SPR

\(SPR_y\) is the equilibrium spawning biomass per recruit that would result from fishing at the current year’s \(F\) level and selectivity. It is a measure of the expected long-term effect of fishing. It is calculated as the ratio of the equilibrium reproductive output per recruit that would occur with the current year’s \(F\) intensities and biology, to the equilibrium reproductive output per recruit that would occur with the current year’s biology and no fishing. Thus, it internalizes all seasonality, movement, weird selectivity patterns, and other factors. Because this index moves in the opposite direction to \(F\) itself, it is usually reported as \(1-SPR\). A benefit of this index is that it is a direct measure of common proxies used for \(F_{MSY}\), such as \(F_{40\%}\). A shortcoming of this index is that it does not directly demonstrate the fraction of the stock that is caught each year. The spr value is also calculated in the benchmarks (see below).
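A simplified single-sex, single-area R sketch of the calculation (ignoring the plus group, seasonality, and movement that SS3 internalizes; names are illustrative):

# Spawning output per recruit with and without fishing; 1 - SPR is the
# commonly reported index. mat_fec is maturity times fecundity at age.
spr_ratio <- function(F, M, sel, mat_fec) {
  n <- length(sel)
  N_F <- cumprod(c(1, exp(-(M + F * sel[-n]))))  # per recruit, fished
  N_0 <- cumprod(c(1, rep(exp(-M), n - 1)))      # per recruit, unfished
  sum(N_F * mat_fec) / sum(N_0 * mat_fec)
}
sel <- c(0, 0.2, 0.6, 1, 1, 1)
mat_fec <- c(0, 0, 0.3, 0.8, 1, 1)
1 - spr_ratio(F = 0.2, M = 0.3, sel = sel, mat_fec = mat_fec)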

The derived quantities report shows an annual spr statistic. The options, as specified in the starter.ss file, are:

8.7.2 Initial Fishing Mortality

Read a short parameter setup line for each fishery and season when there is non-zero equilibrium catch in a season for the fleet (equilibrium catches are input in the catch section of the data file). The parameters are the fishing mortalities for the initial equilibrium catches. Do not try to estimate parameters for fisheries with zero initial equilibrium catch; no parameter line is needed for fleets and seasons with zero equilibrium catch.

If there is catch, then give a starting value greater than zero, and it generally is best to estimate the parameter in phase 1. The initial \(F\) parameter lines are ordered as shown in the example below - by season, then within a season, by fleet.

If the initial equilibrium catch is near msy, then a logical inconsistency may occur as documented in the Equilibrium Recruitment section.

It is possible to use the initial \(F\) method to achieve an estimate of the initial equilibrium \(Z\) in cases where the initial equilibrium catch is unknown. To do this requires 2 changes to the input files:

  1. Data File: Include a positive value for the initial equilibrium catch for at least one fleet, often with a higher standard error depending upon the situation.

  2. Control File: Add an initial \(F\) parameter line (short parameter line) for each fleet and season with initial equilibrium catch to be estimated immediately after the Fishing Mortality setup. It will be influenced by the early age and size comps which should have some information about the early levels of \(Z\).

An example setup with two fishery fleets and two seasons with initial equilibrium catches:
0.1 # \(F\) ballpark
-2001 # \(F\) ballpark year (negative value to disable)
3 # F_method: 1=Pope; 2=Baranov; 3=Hybrid
3 # Maximum \(F\) value
4 # Number of iterations for tuning \(F\) in hybrid method
# Initial \(F\) parameters
LO HI INIT Prior Pr. sd Pr. Type Phase Label
0 3 0.1 0 99 0 1 #InitF_seas_1_fleet_1
0 3 0.1 0 99 0 1 #InitF_seas_1_fleet_2
0 3 0.1 0 99 0 1 #InitF_seas_2_fleet_1
0 3 0.1 0 99 0 1 #InitF_seas_2_fleet_2

8.8 Catchability

Catchability is the model process that converts selected numbers or biomass for a fleet into the expected value for a survey or cpue by that fleet. In SS3, the concept has been extended so that, for example, a time series of an environmental factor could be treated as a survey of the time series of deviations for some parameter. Thus, the concept of catchability is expanded to include a family of link functions beyond simple proportionality.

Collectively, all factors that need a catchability are considered to be an index. Note that the catchability options entered in this section of the control file have a strong relationship with the index units (i.e., biomass vs. numbers vs. deviations) and index error type (i.e., normal vs. T-distribution) as entered in the data file.

For each fleet that has index observations, enter a row with six entries as described below. Fleets with no index observations are omitted.

  1. Fleet Number

  2. Link type or index of deviation vector: An assumed functional form between Q, the expected value, and the survey observation. See example Q parameter lines below for an example set up for link types 4-6.

    1. 1 = simple Q, e.g., proportional: \(y=Q*x\) where \(x\) is a model quantity, like selected biomass, and \(y\) is the expected value for the index.

    2. 2 = mirror simple Q - this will mirror the Q value from another fleet. Mirror in Q must refer to a lower-numbered fleet relative to the fleet with the mirrored Q (example: fleet 3 mirrors fleet 2). A Q parameter line is required for the mirroring fleet but will not be used.

    3. 3 = power function to allow non-linearity in the expected value for the index: \(y=Qx^{(1+c)}\). Therefore, \(c > 0\) leads to hyper-stability and \(c < 0\) leads to hyper-depletion. The power function option cannot be used with deviation surveys (type 35 or 36) because of negative values in the deviation vectors.

    4. 4 = mirror Q with scale (2 parameter lines required), with the second parameter (scale) being added to the Q being mirrored. The mirrored Q will be reported as base Q + scale value. Mirror in Q must refer to a lower-numbered fleet relative to the fleet with the mirrored Q. An example usage is when the cpue of a fleet in one area is assumed to have the same catchability as the cpue for a comparable fleet operating in another area. The scale parameter could be used to adjust for the relative spatial areas because Q relates cpue to area-specific population abundance, not population density per area.

    5. 5 = offset (2 parameter lines required). This option, new with v.3.30.23, requires a second parameter that is added to the expected values before multiplying by Q: \(y=Q*(x+b)\). This is most useful for survey types 35 and 36, even if those survey input vectors have already been zero-centered.

    6. 6 = offset and power (3 parameter lines required). This requires 2 parameters in addition to the Q parameter. The first is for the offset and the second is for the power, \(y=Q*(x+b)^{(1+c)}\).

  3. Link Info: the interpretation of this value depends on the link type and the survey type:

    1. > 0 = mirror the Q from another (lower numbered) survey, designated by referencing the fleet number.

    2. If an index of a deviation vector, option 35, use this column to enter the index of the deviation vector to which the index is related. For example, if only using one deviation vector in your model, this number would be 1.

    3. If a depletion survey (option 34) approach is being applied, the value in this column determines how phases are adjusted:

      • 0 = add 1 to phases of all parameters. Only \(R_{0}\) active in new phase 1. Mimics the default option of previous model versions;

      • 1 = only \(R_{0}\) active in phase 1. Then finish with no other parameters becoming active; useful for data-limited draws of other fixed parameters. Essentially, this option allows the model to mimic dbsra; and

      • 2 = no phase adjustments, can be used when profiling on fixed \(R_{0}\).

  4. Extra_se: Estimable extra standard error for an index

    1. 0 = skip (typical); and

    2. 1 = include a parameter that will contain a value to be added to the input standard deviation of the survey variability.

  5. Bias_adj: Adjusts for log-normal bias when using an informative prior on Q. NOTE: Bias adjustment to Q is ONLY applied when also using the float approach. See below for Q float specifications. An expanded bias adjustment approach is under development.

    1. 0 = no bias adjustment applied; and

    2. 1 = apply bias adjustment. Bias correction will be applied to the estimated Q value.

  6. Float: Allows the Q parameter to be calculated analytically so that there is no mean difference between the natural logs of the index observations and their expected values. Only available for link = 1.

    1. 0 = no float, parameter is estimated; and

    2. 1 = float, analytical solution is used, but parameter line still required.

    3. Additional information regarding the appropriate application of float in Q can be found in the Float Q section below.

For a setup with a single survey, the Q setup matrix could be:
Fleet Link Link Extra Bias
Num. Type Info sd Adjust Float Label
3 1 0 1 1 0 #Survey
-9999 0 0 0 0 0 #End Read
LO HI INIT <other entries> PHASE <other entries> Block Fxn Parameter Label
-5 5 -0.12 ... 1 ... 0 #Survey1 LnQ base
0 0.5 0.1 ... -1 ... 0 #Survey1 Extra sd

If the Q base parameter specifies that it is time-varying by the annual deviation method, short parameter lines to specify the specifications of the deviation vector come after all the base Q parameters.

8.8.1 Example Q parameter lines

Below is an example setup for fleets with a mirrored Q with scale from another fleet (link type = 4), a Q with an offset parameter (link type = 5), and a Q with offset and power parameters (link type = 6):

The Q setup matrix for this example could be:
Fleet Link Link Extra Bias
Num. Type Info sd Adjust Float Label
1 1 0 1 0 0 #Fleet 1
2 4 1 0 0 0 #Fleet 2
3 5 0 0 0 0 #Fleet 3 using Q with offset parameter
4 6 0 0 0 0 #Fleet 4 using Q with offset and power parameters
-9999 0 0 0 0 0 #End Read
A long parameter line is expected for each link parameter (i.e., Q)
LO HI INIT <other entries> PHASE <other entries> Block Fxn Parameter Label
-7 5 0.51 ... 1 ... 0 #Fleet 1 LnQ base
0 0.5 0.1 ... -1 ... 0 #Fleet 1 Extra sd
-7 5 -6 ... -1 ... 0 #Fleet 2 LnQ base
-8 5 -7 ... -1 ... 0 #Fleet 2 Mirror Q offset
-15 15 0.84 ... 1 ... 0 #Fleet 3 Q base
-15 15 0 ... 1 ... 0 #Fleet 3 Q offset
-10 10 0.74 ... 1 ... 0 #Fleet 4 Q base
-10 10 0 ... 1 ... 0 #Fleet 4 Q offset
0.2 3.0 0.5 ... 3 ... 0 #Fleet 4 Q power

8.8.2 Float Q

The use and development of float in Q has evolved over time within SS3. The original approach in earlier versions of SS3 (version 3.24 and before) was that with Q “float” the units of the survey or fishery cpue were treated as dimensionless, so Q was adjusted within each model iteration to maintain a mean difference of 0.0 between observed and expected values (usually in natural log space). In contrast, with Q as a parameter (float = 0), one had the ability to interpret the absolute scaling of Q and put a prior on it to help guide the model solution. Also, with Q as a parameter, the code allowed for Q to be time-varying.
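The float calculation itself is simple; a sketch in R assuming log-normal index errors and equal weighting across observations (not the SS3 source):

# Analytical "float" Q: the ln(Q) that zeroes the mean difference between
# ln(observed index) and ln(expected value before scaling by Q).
float_lnQ <- function(obs, expect_noQ) {
  mean(log(obs) - log(expect_noQ))
}
# Expected index values are then exp(float_lnQ(obs, expect_noQ)) * expect_noQ.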

Then, midway through the evolution of the v.3.24 code lineage, a new Q option was introduced based on user recommendations. This option allowed Q to float and the resulting Q value to be compared to a prior; hence the information in that prior would pull the model solution in the direction of a floated Q close to the prior.

Currently, in v.3.30, that float with prior capability is fully embraced. All fleets that have any survey or cpue options need to have a catchability specification and get a base Q parameter in the list. Any of these Q’s can be either:

Q relates the units of the survey or cpue to the population abundance, not the population density per unit area. But many surveys and most fishery cpue are proportional to mean fish density per unit area. This does not have any impact in a one-area model because the role of area is absorbed into the value of Q. In a multi-area model, one may want to assert that the relative difference in cpue between two areas is informative about the relative abundance between the areas. However, because cpue is a measure of fish density per unit area, one may want to multiply cpue by area before putting the data into the model so that asserting the same Q for the two areas will be informative about relative abundance.

In v.3.30.13, a new catchability option has been added that allows Q to be mirrored and to add an offset to ln(Q) of the primary area when calculating the ln(Q) for the dependent area. The offset is a parameter and, hence, can be estimated and have a prior. This option allows the cpue data to stay in density units and the effect of relative stock area is contained in the value of the ln(Q) offset.

8.8.3 Catchability Time-Varying Parameters

Time-Varying catchability can be used. Details on how to specify time-varying parameters can be found in the Time-Varying Parameter Specification and Setup section.

8.8.4 Q Conversion Issues Between Stock Synthesis v.3.24 and v.3.30

In v.3.24 it was common to use the deviation approach, implemented as if it were survey-specific blocks, to create a time-varying Q for a single survey. In some cases, only one year’s deviation was made active in order to implement, in effect, a block on Q. The transition executable (ss_trans.exe) cannot convert this, but an analogous approach is available in v.3.30 because true blocks can now be used, as well as environmental links and annual deviations. Also note that deviations in v.3.24 were survey-specific (so no parameter for years with no survey). In v.3.30, deviations are always year-specific, so you might have a deviation created for a year with no survey.

8.9 Selectivity and Discard

For each fleet and survey, read a definition line for size selectivity and retention.

Example Setup for Selectivity:
Size Selectivity:
Pattern Discard Male Special Label
1 2 0 0 #Fishery1
1 0 0 0 #Survey1
0 0 0 0 #Survey2
Age Selectivity:
Pattern Discard Male Special Label
11 0 0 0 #Fishery1
11 0 0 0 #Survey1
11 0 0 0 #Survey2

8.9.0.1 Pattern


Specify the size/age selectivity pattern. See the Selectivity Pattern section for user options.

8.9.0.2 Discard


Discard options:

8.9.0.3 Male


Male specific selectivity options:

8.9.0.4 Special


Special (0/value): This value is used in different ways depending on the context. If the selectivity type is to mirror another selectivity pattern, then put the index of the source fleet or survey here; it must refer to a lower-numbered fleet/survey. If the selectivity type is 6 (linear segments), then put the number of segments here. If the selectivity type is 7, then put a 1 here to keep selectivity constant above the mean size of old fish of morph 1. If the selectivity type is 27 (cubic spline), then put the number of nodes (knots) here.

8.9.0.5 Age Selectivity


For each fleet and survey, read a definition line for age selectivity. The 4 values to be read are the same as for the size-selectivity.

As of v.3.30.15, for some selectivity patterns the user can specify the minimum age of selected fish. Most selectivity curves by default select age-0 fish (i.e., they inherently specify the minimum age of selected fish as 0). However, it is fairly common for the age bins specified in the data file to start at age 1. This means that any selected age-0 fish are pooled up into the age-1 bin, which will have a detrimental effect on fitting age-composition data. In order to prevent the selection of age-0 (or, if desired, older young) fish, the user can specify the minimum selected age for some selectivity patterns (12, 13, 14, 16, 18, 26, or 27) in v.3.30.15 and later. For example, if the minimum selected age is 1 (so that age-0 fish are not selected), the selectivity pattern type can be specified as 1XX, where XX is the selectivity pattern. A more specific example: if selectivity is age-logistic and the desired minimum selected age is 1, the selectivity pattern would be specified as 112 (the regular age-logistic selectivity pattern is option 12). The user can also select higher minimum selected ages if desired; for example, 212 would be the age-logistic selectivity pattern with a minimum selected age of 2 (so that age-0 and age-1 fish are not selected).

8.9.1 Reading the Selectivity and Retention Parameters

Read the required number of parameter setup lines as specified by the definition lines above. The complete order of the parameter setup lines is:

  1. Size selectivity for fishery 1;

  2. Retention for fishery 1 (if discard specified);

  3. Discard Mortality for fishery 1 (if discard specified);

  4. Male offsets for size selectivity for fishery 1 (if offsets used);

  5. Repeat entries 1-4, as applicable, for each remaining fleet and survey;

  6. Age selectivity for fishery 1;

  7. Retention for fishery 1 (if discard specified);

  8. Discard Mortality for fishery 1 (if discard specified);

  9. Male offsets for age selectivity for fishery 1 (if offsets used); and

  10. Repeat entries 6-9, as applicable, for each remaining fleet and survey.

The list of parameters to be read from the above setup would be:
LO HI INIT PRIOR <other entries> Block Fxn Parameter Label
19 80 53.5 50 ... 0 #SizeSel p1 fishery 1
0.01 60 18.9 15 ... 0 #SizeSel p2 fishery 1
20 70 38.6 40 ... 0 #Retain_L_infl_fishery 1
0.1 10 6.5 1 ... 0 #Retain_L_width_fishery 1
0.001 1 0.98 1 ... 0 #Retain_L_asymptote_logit_fishery 1
-10 10 1 0 ... 0 #Retain_L_maleoffset_fishery 1
0.1 1 0.6 0.6 ... 0 #DiscMort_L_infl_fishery 1
-2 2 0 0 ... 0 #DiscMort_L_width_fishery 1
20 70 40 40 ... 0 #DiscMort_L_level_old_fishery 1
0.1 10 1 1 ... 0 #DiscMort_L_male_offset_fishery 1
19 80 53.5 50 ... 0 #SizeSel p1 survey 1
0.01 60 18.9 15 ... 0 #SizeSel p2 survey 1
0 40 0 5 ... 0 #AgeSel p1 fishery 1
0 40 40 5 ... 0 #AgeSel p2 fishery 1
0 40 0 5 ... 0 #AgeSel p1 survey 1
0 40 40 5 ... 0 #AgeSel p2 survey 1
0 40 0 5 ... 0 #AgeSel p1 survey 2
0 40 0 5 ... 0 #AgeSel p2 survey 2

8.9.2 Selectivity Patterns

The currently defined selectivity patterns, and corresponding required number of parameters, are:

SIZE BASED SELECTIVITY
Pattern N Parameters Description
0 0 Selectivity = 1.0 for all sizes.
1 2 Logistic selectivity.
2 6 Older version of selectivity pattern 24 (double normal with peak and tail controls) for backward compatibility in treatment of sex-specific scaling.
5 2 Mirror selectivity. The two parameters select bin range.
6 2 + N breaks Older non-parametric size selectivity (see also Patterns 21 and 43).
8 8 Double logistic selectivity, with defined peak, uses smooth joiners; special = 1 causes constant selectivity above \(L_{inf}\) for morph 1. Recommend using pattern 24 instead.
9 6 Simple double logistic selectivity with no defined peak.
11 2 Selectivity = 1.0 for a specified length-bin range.
15 0 Mirror another selectivity (same for age selectivity).
21 2 Newer non-parametric size selectivity (see also Patterns 6 and 43).
22 4 Double normal selectivity; similar to casal.
23 6 Same as the selectivity pattern 24 (double normal with peak and tail controls) except the final selectivity is now directly interpreted as the terminal selectivity value. Cannot be used with Pope’s \(F\) method because maximum selectivity may be greater than 1.
24 6 Double normal selectivity with defined initial and final selectivity level - Recommended option.
25 3 Exponential logistic selectivity.
27 3 + 2*N nodes Cubic spline selectivity with N nodes.
42 5 + 2*N nodes Selectivity pattern 27 (cubic spline) but with 2 additional scaling parameters.
43 4 + N breaks Selectivity pattern 6 (non-parametric) but with 2 additional scaling parameters (see also Patterns 6 and 21).
AGE BASED SELECTIVITY
Pattern N Parameters Description
0 0 Selectivity = 1.0 for ages 0+.
10 0 Selectivity = 1.0 for all ages beginning at age 1. If it is desired that age-0 fish be selected, then use selectivity pattern 11 and set minimum age to 0.0.
11 2 Selectivity = 1.0 for a specified age range.
12 2 Logistic selectivity.
13 8 Double logistic selectivity with IF joiners. Use is discouraged; use selectivity pattern 18 (double logistic selectivity) instead.
14 N ages + 1 Separate parameter for each age (empirical), value at age is \(\frac{1}{1+exp(-x)}\).
15 0 Mirror another age-specific selectivity pattern.
16 2 Coleraine single Gaussian Selectivity.
17 N ages + 1 or special + 1 Empirical as a random walk from previous age.
18 8 Double logistic selectivity, with defined peak, uses smooth joiners.
19 6 Simple double logistic selectivity with no defined peak.
20 6 Double normal selectivity with defined initial and final level. Recommended option.
26 3 Exponential logistic selectivity.
27 3 + 2*N nodes Cubic spline selectivity in age based on N nodes.
41 2 + N ages + 1 Selectivity pattern 17 (random walk) but with 2 additional scaling parameters.
42 5 + 2*N nodes Selectivity pattern 27 (cubic spline) but with 2 additional scaling parameters.
44 4 + N ages Selectivity pattern 17 (random walk) but with separate parameters for males and females and with revised controls.
45 4 + N ages Selectivity pattern 14 (revise age) but with separate parameters for males and females and with revised controls.

8.9.2.1 Special Selectivity Options


Special selectivity options (types 30 and above in size-based selectivity) are no longer specified within the control file. Specifying the use of one of these selectivity types is now done within the data file by selecting the survey “units” (see the section on Index units).

8.9.3 Selectivity Pattern Details

8.9.3.1 Pattern 1 (size) and 12 (age) - Simple Logistic


Logistic selectivity for the primary sex (if selectivity varies by sex) is formulated as: \[S_l = \frac{1.0}{1+exp(-ln(19)(L_l - p1)/p2)}\] where \(L_l\) is the length bin. If age-based selectivity is selected, then the length bin is replaced by the age vector. If sex-specific selectivity is specified, the p1 and p2 parameters for the non-primary sex are estimated as offsets. Note that with a large p2 parameter, selectivity may not reach 1.0 at the largest size bin. The parameters are:
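A minimal R sketch of this curve (the ln(19) scaling makes p1 the length at 50% selection and p2 the distance between the 50% and 95% selection lengths):

logistic_sel <- function(L, p1, p2) {
  1 / (1 + exp(-log(19) * (L - p1) / p2))
}
logistic_sel(c(50, 65), p1 = 50, p2 = 15)  # 0.50 at p1, 0.95 at p1 + p2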

8.9.3.2 Pattern 2 (size) - Older version of selectivity pattern 24 for backward compatibility


Pattern 2 differs from pattern 24 only in the treatment of sex-specific offset parameter 5. See note in Male Selectivity Estimated as Offsets from Female Selectivity for more information. Pattern 24 was changed in v.3.30.19 with the old parameterization now provided in Pattern 2.

8.9.3.3 Pattern 5 (size) - Mirror Selectivity


Two parameters select the min and max bin number (not min-max size) of the source selectivity pattern. If the first parameter has a value \(<=\) 0, it is interpreted as a value of 1 (i.e., the first bin). If the second parameter has a value \(<=\) 0, it is interpreted as the maximum length bin (i.e., the last bin specified in the data file). The mirrored selectivity pattern must be from a lower-numbered fleet (i.e., already specified before the mirrored fleet).

8.9.3.4 Pattern 6 (size) - Older non-parametric Selectivity


Non-parametric size selectivity uses a set of linear segments. The first break point is at Length = p1 and the last break point is at Length = p2. The total number of break points, N, is specified by the value of the Special factor in the selectivity setup, so the number of intervals is one less than the number of break points. Intermediate break points are located at equidistant intervals between p1 and p2. Parameters 3 to N+2 are the selectivity values at the break points, entered in logistic form, e.g., \(1/(1+exp(-x))\). Selectivity ramps from \(exp(-10)\) up to the p3 level for L < p1 and is constant at the last break point's level for L > p2. Note that prior to version 3.03 the break points were specified in terms of bin number, rather than length.

See also Pattern 21 for a newer non-parametric size selectivity option, and Pattern 43 which is based on Pattern 6 with additional controls over the scaling.

8.9.3.5 Pattern 8 (size) and 18 (age) - Double Logistic Selectivity


Users are discouraged from using the double logistic selectivity. The double normal selectivity pattern (size pattern 24, age pattern 20) provides similar functionality but with only 6 parameters.

8.9.3.6 Pattern 11 (size or age) - Selectivity = 1.0 for range


Length- or age-selectivity can be set equal to 1.0 for a range of lengths or ages. Like other selectivity types, it is specified in terms of the population, not the data bins.

For age-based selectivity, parameters p1 and p2 are set in terms of population age. For example, p1 = 0 and p2 = 4 would mean selectivity of 1.0 for age-0, age-1, age-2, age-3, and age-4 fish. These parameters must be less than or equal to the maximum age, not the maximum age bin. All ages below p1 and above p2 have selectivity equal to 0.

If the selectivity is length-based, the input parameters should match the population length bin number that will have selectivity = 1.0. A simple example of how this works is as follows:

Population Length Bin # 1 2 3 4 5 6 7 8
Population Length (cm) 10 12 14 16 18 20 22 24

8.9.3.7 Pattern 14 (age) - Revise Age


Age-selectivity pattern 14 allows selectivity-at-age to be the same as selectivity at the next younger age. When using this option, the range on each parameter should be approximately -5 to 9 to prevent the parameters from drifting into extreme values with nil gradient. The age-based selectivity is calculated for \(a = 1\) to \(a = A_{max} + 1\):

\[S_a = \frac{1}{1+exp(-(p_{a+1} + (9 - max(p_a))))}\]

8.9.3.8 Pattern 17 (age) - Random Walk


This selectivity pattern provides for a random walk in ln(selectivity). For each age \(a \geq A_{\text{min}}\), where \(A_{\text{min}}\) is the minimum age for which selectivity is allowed to be non-zero, there is a selectivity parameter, \(p_a\), controlling the changing selectivity from age \(a-1\) to age \(a\).

The selectivity at age \(a\) is computed as:

\[S_a = \exp (S'_a - S'_{\text{max}}),\] where

\[S'_a = \sum_{i = A_{\text{min}}}^{a} p_i\] and

\[S'_{\text{max}} = \mbox{max} \{S'_a\}.\]

Selectivity is fixed at \(S_a = 0\) for \(a < A_{\text{min}}\).

This formulation has the properties that the maximum selectivity equals 1, positive values of \(p_a\) are associated with increasing selectivity between ages \(a-1\) and \(a\), negative values are associated with decreasing selectivity between those ages, and \(p_a = 0\) gives constant selectivity.

The condition that maximum selectivity equals 1 results in one fewer degree of freedom than the number of estimated \(p_a\). Therefore, at least one parameter should be fixed at an arbitrary value, typically \(p_{A_{\text{min}}}=0\).

The number of parameter lines required in the control file for pattern 17 is N ages + 1, unless a value greater than zero is included in the special column. If special is greater than 0, then special + 1 is the number of parameter lines needed in the control file. The value of special should be less than or equal to N ages. Input to the special column is used to reduce the number of parameter lines required (selectivity is constant above the age represented by the last parameter value read when using special).
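The cumulative-sum construction above reduces to a few lines of R (illustrative):

# Random-walk selectivity: parameters are increments in ln(selectivity);
# the result is rescaled so the maximum is exactly 1.
randwalk_sel <- function(p) {
  s_prime <- cumsum(p)          # S'_a
  exp(s_prime - max(s_prime))   # S_a = exp(S'_a - S'_max)
}
randwalk_sel(c(0, 0.9, 0.5, 0.1, -0.2, -0.2))  # first parameter fixed at 0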

In typical usage:

Code for implementing random walk selectivity can be found in SS_selex.tpl, search for “SS_Label_Info_22.7.17”.

8.9.3.9 Pattern 22 (size) - Double Normal Selectivity with Plateau


8.9.3.10 Pattern 23 (size), 24 (size), 2 (legacy), and 20 (age) - Double Normal Selectivity with Defined Peak and Tail Controls


Notes for Double Normal with Defined Peak and Tail Controls:

8.9.3.11 Pattern 15 (size or age) - Mirror Another Selectivity


No parameters. Whole age range is mirrored from another fleet. The mirrored selectivity pattern must be from a lower fleet number (e.g., already specified before the mirrored fleet).

8.9.3.12 Pattern 16 (age) - Gaussian Selectivity(similar to Coleraine)


8.9.3.13 Pattern 9 (size) and 19 (age) - Simple Double Logistic Selectivity


This pattern has 4 parameters and 2 fixed input values.

The shape of the selectivity is provided by the function (here in terms of age \(a\), but similarly applicable to length bin \(l\))

\[S'_a = \begin{cases} \hfil 0 & \text{if $a < p_5$,} \\ \left( \frac{1}{1+\exp\left(-p_2 \left( a - p_1 \right) \right) } \right) \left(1 - \frac{1}{1+\exp\left(-p_4 \left( a - [p_6 p_1 - p_3]\right) \right) } \right) & \text{if $a \geq p_5$.} \end{cases}\]

which is then rescaled by first adding a small constant to all values and then rescaling to have a maximum of 1.0:

\[S_a = (S'_a + 0.000001) / \max_{a'}\{S'_a + 0.000001\}\]

where

8.9.3.14 Pattern 21 (size) - Newer non-parametric selectivity


Non-parametric size selectivity which uses a set of linear segments like Pattern 6, but with full control over the break points (as opposed to equidistant points over a specified range); the parameters represent the absolute selectivity without the logistic transformation used in Pattern 6. This allows direct input of an empirical selectivity vector as fixed parameters if desired. The Special column in the selectivity setup specifies the number of break points, and the required number of parameter lines is twice this value: the first set of N parameters provides the length at each break point (which should be fixed via a negative phase) and the second set of N provides the absolute selectivity at each point.

All of the selectivity parameters must be non-negative, with a lower bound at 0 (or higher). There is no rescaling to ensure a maximum of 1.0; therefore it may make sense to have upper bounds at 1.0. Alternatively, a prior centered at 1.0 could be added to the parameter associated with a length bin which is expected to have peak selectivity. If the largest parameter is not fixed at 1.0 or estimated close to 1.0, then care should be taken in interpreting the associated fishing mortality values for the fleet in question.

Selectivity ramps linearly from the origin up to the first selectivity parameter at the first break point and is constant beyond the last break point.

See also Pattern 6 for an older non-parametric size selectivity option, and Pattern 43 which is based on Pattern 6 with additional controls over the scaling.

8.9.3.15 Pattern 25 (size) and 26 (age) - Exponential Logistic Selectivity


The exponential logistic selectivity included here is based on the exponential-logistic form detailed by Thompson (1994); however, the parameterization within SS3 differs. Explorations using this selectivity form have shown that the estimation of p1 can be highly unstable. Users are strongly encouraged to use the double normal selectivity rather than the exponential logistic selectivity.

The exponential logistic selectivity is calculated as: \[peak = \text{min}(L_l) + p2(\text{max}(L_l) - \text{min}(L_l) )\] where \(L_l\) is the length bins at bin \(l\) (if age-based substitute with age bins) and: \[S_l = \frac{e^{p3*p1(peak-L_l)}}{1-p3(1-e^{p1(peak- L_l)})}\]
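A small R sketch of this parameterization (illustrative values; selectivity equals 1.0 at the peak length):

explog_sel <- function(L, p1, p2, p3) {
  peak <- min(L) + p2 * (max(L) - min(L))
  exp(p3 * p1 * (peak - L)) / (1 - p3 * (1 - exp(p1 * (peak - L))))
}
explog_sel(seq(10, 90, 10), p1 = 0.1, p2 = 0.5, p3 = 0.2)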

8.9.3.16 Pattern 27 (size or age) - Cubic Spline Selectivity


This selectivity pattern uses the admb implementation of the cubic spline function. This function requires input of the number of nodes, the positions of those nodes, the parameter values at those nodes, and the slope of the function at the first and last node.

An alternative to specifying or estimating the slope at the first and last nodes is to fix those values at 1e30, which will create a natural cubic spline that allows the slope (first derivative) to be flexible but sets the curvature (second derivative) to 0 at the first and last nodes.

The number of nodes is specified in the “special” column of the selectivity setup. The pattern number 27 is used to invoke cubic spline for size selectivity and for age selectivity; the input syntax is identical.

For a 3 node setup, the input parameters would be:

Notes:

Code for implementing cubic spline selectivity can be found in SS_selex.tpl, search for “SS_Label_Info_22.7.27”.

One potential problem that may occur with a cubic spline is a U-shaped pattern in the selectivity around the first node. If this occurs, the initial setup code (auto-generation options described below) can be changed from 0, 1 or 2 to 10, 11, or 12 which will cause selectivity to be fixed at 0.0 for all bins below the first node. A natural cubic spline (noted above) may be an alternative solution to this issue.

Auto-Generation of Cubic Spline Control File Setup: A new feature pioneered with the cubic spline function is a capability to produce more specific parameter labels and to auto-generate selectivity parameter setup. The auto-generation feature is controlled by the first selectivity parameter value for each fleet that is specified to use the cubic spline. There are 6 possible values for this setup parameter:

With auto-generation option 1, 2, 11, or 12, it is still necessary to include placeholder rows of values in the parameter file so that the init_matrix command can read the current number of values, because all selectivity parameter lines are read as a single matrix with dimensions N parameters \(\times\) 14 columns. The read values of min, max, initial, prior, prior type, prior standard deviation, and phase will be overwritten.

Cumulative size and age distribution is calculated for each fleet, summing across all samples and both sexes. These distributions are output in echoinput.sso and in a new OVERALL_COMPS section of report.sso.

When the nodes are auto-generated, the first node is placed at the size corresponding to the 2.5th percentile of the cumulative size distribution, the last is placed at the 97.5th percentile, and the remainder are placed at equally spaced percentiles along the cumulative size distribution. These calculated node values are output into control.ss_new, so the user could extract these nodes from control.ss_new, edit them to desired values, and then insert them into the input control file. Remember to turn off auto-generation in the revised control file.

When the complete auto-generation is selected, the control.ss_new would look like the table below:

LO HI INIT <other entries> Block Fxn Parameter Label
0 2 2.0 ... 0 #SizeSpline Code
-0.001 1 0.13 ... 0 #SizeSpline GradLo
-1 0.001 -0.03 ... 0 #SizeSpline GradHi
11 95 38 ... 0 #SizeSpline Knot1
11 95 59 ... 0 #SizeSpline Knot2
11 95 74 ... 0 #SizeSpline Knot3
-9 7 -3 ... 0 #SizeSpline Value1
-9 7 -1 ... 0 #SizeSpline Value2
-9 7 -0.78 ... 0 #SizeSpline Value3

8.9.3.17 Pattern 41 (age) - Random Walk Selectivity with User-Defined Scaling


Selectivity pattern 17 with two additional parameters. The two additional parameters are the bin numbers that define the range of bins used for scaling: all selectivity values are scaled (divided) by the mean value over this range (see the expression following the table below). The low and high bin numbers are read before the other selectivity parameters.

LO HI INIT <other entries> Block Fxn Parameter Label
0 20 10 ... 0 #AgeSel_ScaleAgeLo
0 20 20 ... 0 #AgeSel_ScaleAgeHi
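In other words, if \(b_{lo}\) and \(b_{hi}\) are the two bin numbers read above, then the scaled selectivity is (notation ours; the same scaling applies to Patterns 42 and 43 below):

\[S'_b = S_b \bigg/ \left(\frac{1}{b_{hi}-b_{lo}+1}\sum_{b'=b_{lo}}^{b_{hi}} S_{b'}\right)\]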

8.9.3.18 Pattern 42 (size or age) - Cubic Spline Selectivity with User-Defined Scaling


Selectivity pattern 27 with two additional parameters. The two additional parameters are the bin numbers to define the range of bins for scaling. All the selectivity values will be scaled (divided) by the mean value over this range. The low and high bin numbers are defined before the other selectivity parameters.

LO HI INIT <other entries> Block Fxn Parameter Label
0 20 10 ... 0 #AgeSpline_ScaleAgeLo
0 20 20 ... 0 #AgeSpline_ScaleAgeHi

8.9.3.19 Pattern 43 (size) - Non-parametric Selectivity with User-Defined Scaling


Selectivity pattern 6 with two additional parameters. The two additional parameters are the bin numbers to define the range of bins for scaling. All the selectivity values will be scaled (divided) by the mean value over this range. The low and high bin numbers are defined before the other selectivity parameters.

LO HI INIT <other entries> Block Fxn Parameter Label
1 80 50 ... 0 #SizeSel_ScaleBinLo
1 80 70 ... 0 #SizeSel_ScaleBinHi

See also Pattern 21 for a newer non-parametric size selectivity option.

8.9.3.20 Pattern 44 (age) - Random Walk Selectivity with Separate Parameters for Males and Females


Similar to pattern 17 (random walk) but with separate parameters for males and females. This selectivity pattern provides for a random walk in ln(selectivity). In typical usage:

An example specification and setup for this selectivity option, where selectivity is dome-shaped and peaks at age 2, female and male selectivity are equal, and there are 4 change points per sex:

#Pattern Discard Male Special
44 0 0 4
LO HI INIT <other entries> Block Fxn Parameter Label
0 20 0 ... 0 #first selex age
0 20 2 ... 0 #first age peak selex (mean)
0 20 2 ... 0 #last age peak selex (mean)
-1 2 -0.001 ... 0 #male ln(ratio)
-10 10 3.01 ... 0 #female ln(selex) change 1
-10 10 1.56 ... 0 #female ln(selex) change 2
-10 10 -0.15 ... 0 #female ln(selex) change 3
-10 10 -0.15 ... 0 #female ln(selex) change 4
-1000 10 -1000 ... 0 #male ln(selex) change 1
-1000 10 -1000 ... 0 #male ln(selex) change 2
-1000 10 -1000 ... 0 #male ln(selex) change 3
-1000 10 -1000 ... 0 #male ln(selex) change 4

8.9.3.21 Pattern 45 (age) - Revise Age Selectivity with Separate Parameters for Males and Females


Similar to pattern 14 (revise age) but with separate parameters for males and females. Like pattern 14, pattern 45 allows selectivity-at-age to be the same as selectivity at the next younger age.

An example specification and setup for this selectivity option, where selectivity is asymptotic, female and male selectivity are equal, and there are 3 change points per sex:

#Pattern Discard Male Special
45 0 0 3
LO HI INIT <other entries> Block Fxn Parameter Label
0 20 2 ... 0 #first selex age
0 20 5 ... 0 #first age peak selex (mean)
0 20 5 ... 0 #last age peak selex (mean)
-1 2 -0.001 ... 0 #male ln(ratio)
-10 10 -8.1 ... 0 #female ln(selex) change 1
-10 10 -4.1 ... 0 #female ln(selex) change 2
-10 10 -1.8 ... 0 #female ln(selex) change 3
-1000 10 -1000 ... 0 #male ln(selex) change 1
-1000 10 -1000 ... 0 #male ln(selex) change 2
-1000 10 -1000 ... 0 #male ln(selex) change 3

8.9.4 Retention

Retention is defined as a logistic function of size or age. It does not apply to surveys. Four parameters (for asymptotic retention) or seven parameters (for dome-shaped retention) are used:

\[\text{Retention}_l = \left(\frac{P3'}{1 + e^{\frac{-(L_l-(P1+P4*male))}{P2}}}\right)*\left(1 - \frac{1}{1 + e^{\frac{-(L_l-(P5+P7*male))}{P6}}}\right)\] where \(P3' = 1/(1+e^{-P3})\) is the asymptotic retention calculated from the \(P3\) parameter which is in logit space.
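A minimal R sketch of this function (not from the manual; with the defaults below, P5 = Inf makes the descending limb equal to 1, giving the four-parameter asymptotic case):

    # logistic retention; len is the vector of length bins,
    # male is 0 for females and 1 for males
    retention <- function(len, P1, P2, P3, P4 = 0, P5 = Inf, P6 = 1, P7 = 0, male = 0) {
      P3prime <- 1 / (1 + exp(-P3))  # asymptotic retention from logit-space P3
      ascend <- P3prime / (1 + exp(-(len - (P1 + P4 * male)) / P2))
      descend <- 1 - 1 / (1 + exp(-(len - (P5 + P7 * male)) / P6))
      ascend * descend
    }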

8.9.5 Discard Mortality

Discard mortality is defined as a logistic function of size such that mortality declines from 1.0 to an asymptotic level as fish get larger. It does not apply to surveys, and it does not affect the calculation of expected values for discard data. It is applied so that the total mortality rate is:

dead fish = selectivity * (retain + (1.0-retain)*discard mortality)

If discard mortality is 1.0, all selected fish are dead; if discard mortality is 0.0, only the retained fish are dead.

Four parameters are used:

Discard mortality is calculated as: \[\text{Discard Mortality}_l = 1 - \frac{1-P3}{1+e^{\frac{-(L_l-(P1+P4*male))}{P2}}}\]
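The same relationships can be sketched in R (illustrative only, not from the manual):

    # discard mortality declining from 1.0 to the asymptotic level P3
    discard_mortality <- function(len, P1, P2, P3, P4 = 0, male = 0) {
      1 - (1 - P3) / (1 + exp(-(len - (P1 + P4 * male)) / P2))
    }
    # total mortality: dead fish = selectivity * (retain + (1 - retain) * discard mortality)
    dead_fraction <- function(selex, retain, disc_mort) {
      selex * (retain + (1 - retain) * disc_mort)
    }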

8.9.6 Sex-Specific Selectivity

There are two approaches to specifying sex specific selectivity. One approach allows male selectivity to be specified as a fraction of female selectivity (or vice versa). This first approach can be used for any selectivity pattern. The other option allows for separate selectivity parameters for each sex plus an additional parameter to define the scaling of one sex’s peak selectivity relative to the other sex’s peak. This second approach has only been implemented for a few selectivity patterns.

8.9.6.1 Male Selectivity as a Fraction of Female Selectivity (or vice versa):


If the “domale” flag is set to 1, then the selectivity parameters define female selectivity and the offset defined below sets male selectivity relative to female selectivity. The two sexes switch roles if the “domale” flag is set to 2. Generally it is best to select the option so that the dependent sex has lower selectivity, thus obviating the need to rescale for selectivities that are greater than 1.0. Sex specific selectivity is done the same way for all size and age selectivity options.

For intermediate ages, the natural log values are linearly interpolated on size (age).

If selectivity for the dependent sex is greater than the selectivity for the first sex (which always peaks at 1.0), then the male-female selectivity matrix is rescaled to have a maximum of 1.0.

8.9.6.2 Male Selectivity Estimated as Offsets from Female Selectivity (or vice versa):


A new sex selectivity option (3 or 4) has been implemented for size selectivity patterns 1 (logistic) and 23 and 24 (double normal) or age selectivity pattern 20 (double normal age). Rather than calculate male selectivity as an offset from female selectivity, here the male selectivity is calculated by making the male parameters an offset from the female parameters (option 3), or females are offset from males with option 4. The description below applies to option 3. If the size selectivity pattern is 1 (logistic), then read 3 parameters:

If the size selectivity pattern is 2, 20, 23 or 24 (double normal), then:

Notes:

8.9.7 Dirichlet-multinomial Error for Data Weighting

If the Dirichlet-multinomial error distribution was selected in the data file for length or age data weighting, add additional parameter line(s) immediately following the age selectivity parameter block. There should be 1 parameter line for each parameter in the data file.

For additional information about the Dirichlet-multinomial please see Thorson et al. (2017) and the detailed Data Weighting section.

The list of parameters would be something like:
LO HI INIT <other entries> Block Fxn Parameter Label
-5 10 0.5 ... 0 #ln(DM theta) Age or Length 1
-5 10 0.5 ... 0 #ln(DM theta) Age or Length 2

8.9.8 Selectivity Time-Varying Parameters

Time-Varying selectivity can be used. Details on how to specify time-varying parameters can be found in the Time-Varying Parameter Specification and Setup section.

8.9.9 Two-Dimensional Auto-Regressive Selectivity (2DAR; Semi-parametric selectivity)

This feature combines a baseline parametric selectivity (any of the existing options) with a matrix of auto-correlated deviations by year and age (or length) to achieve semi-parametric selectivity as described in Xu et al. (2019). For brevity, only ages will be referred to here. With 2DAR, these deviations are not applied to the selectivity parameters themselves. Instead, the deviations are exponentiated and then used as year- and age-specific multipliers on the chosen baseline selectivity.
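In symbols (notation ours): if \(S_a\) is the baseline selectivity at age \(a\) and \(\epsilon_{a,y}\) is the estimated deviation for age \(a\) in year \(y\), the semi-parametric selectivity is

\[S_{a,y} = S_a \, e^{\epsilon_{a,y}}\]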

There are a range of controls for this feature. The deviations can be applied to either the age selectivity or the length selectivity over a specified range of ages (or lengths) and years. The variance of the devs can be the same for all ages, or can be age-specific up to a specified age, with constant variance for all older ages within the 2DAR range. Ages outside the 2DAR range revert to the parametric selectivity. The user can select a range of years for which devs will be created and can specify whether selectivity for years outside this range reverts to the parametric selectivity or continues with the semi-parametric selectivity from the terminal year with devs; thus, devs could be defined for just one year (the first year) and then mirrored for all subsequent years. Finally, the controls allow a user-specified level of auto-correlation along the age dimension separate from a user-specified level of auto-correlation along the year dimension.

The 2DAR option has not yet been explored adequately to provide guidance on best practices, but preliminary guidance includes the following. When using this option for age-based selectivity, if there are not too many ages, a good choice for the baseline selectivity might be random walk selectivity (pattern 17) because it provides the most flexibility, allowing the 2DAR deviations to capture only the annual deviations around this baseline rather than also accounting for misspecification of the underlying functional form. Otherwise, a simple parametric selectivity form like logistic or exponential logistic would be a reasonable choice. With regard to the variability, never estimate the sigma parameters, as the estimates will be biased toward zero. Setting sigma near 1.0 is advised as an initial default unless the user explicitly determines superior performance of a different value.

Typical Value Description and Options
1 Two-dimensional auto-regressive selectivity:
0 = Not used,
1 = Use 2DAR.
COND = 1 Then read a specification line for the first fleet that uses 2DAR, followed by any short parameter lines invoked by that specification; then enter another specification line for the next fleet using 2DAR (if any) and its parameter lines; finish with a specification line containing a negative value for the fleet.

The specification line contains 11 values:

  1. Fleet: Fleet number to which semi-parametric deviations should be added.

  2. Ymin: First year with deviations.

  3. Ymax: Last year with deviations.

  4. Amin: First integer age (or population length bin index) with deviations.

  5. Amax: Last integer age (or population length bin index) with deviations.

  6. Sigma_Amax: the last age (or population length bin index) for which a separate sigma should be read. The number of sigma parameter lines to be read will be Sigma_Amax - Amin + 1. The feature allows for the expected situation in which annual variability in selectivity is higher for younger ages. For simplicity, only a single sigma parameter line needs to be read if Sigma_Amax equals Amin, or if Sigma_Amax < 0.

  7. Use Rho (0): Use autocorrelation parameters. The rho feature of 2DAR is not implemented and should not be used (value must be 0).

  8. Len(1) / Age(2): to specify whether the deviations should be applied to length- or age-based selectivity.

  9. Phase: Phase to begin estimation of the deviation parameters.

  10. Before Range: How should selectivity be modeled in the years prior to Ymin? Available options are (0) apply no deviations, (1) use deviations from the first year with deviations (Ymin), and (3) use average across all years with deviations (Ymin to Ymax).

  11. After Range: Similar to Before Range but defines how selectivity should be modeled after Ymax.

Following each fleet-specific specification line are short parameter lines for the standard deviation of the devs (sigma_selex).

  1. Short parameter line for sigma_selex at Amin;

  2. If Sigma_Amax > Amin, then additional parameter lines for each age up to Sigma_Amax

If multiple fleets are specified for 2DAR, the following lines are repeated for each fleet:
Sigma Use Len(1)/ Before After
Fleet Ymin Ymax Amin Amax Amax Rho Age(2) Phase Range Range
1 1979 2015 2 10 1 0 2 5 0 0
PRIOR PRIOR
LO HI INIT PRIOR sd TYPE PHASE LABEL
0 4 1 1 0.1 6 -4 #Sigma selex fleet 1, first age
0 4 1 1 0.1 6 -4 #Sigma selex fleet 1, second age
0 4 1 1 0.1 6 -4 #Sigma selex fleet 1,... age
# Additional fleets (e.g., fleet 2) with 2DAR selectivity
Sigma Use Len(1)/ Before After
Fleet Ymin Ymax Amin Amax Amax Rho Age(2) Phase Range Range
2 1979 2015 2 10 1 0 2 5 0 0
PRIOR PRIOR
LO HI INIT PRIOR sd TYPE PHASE LABEL
0 4 1 1 0.1 6 -4 #Sigma selex fleet 2, first age
0 4 1 1 0.1 6 -4 #Sigma selex fleet 2, second age
0 4 1 1 0.1 6 -4 #Sigma selex fleet 2,... age
# Terminator line of 11 values indicates the end of the 2DAR inputs
-9999 1 1 1 1 1 1 1 1 1 1

8.10 Tag Recapture Parameters

Specify if tagging data are being used:

Typical Value Description and Options
1 Tagging Data Present:
0 = No tagging data, or if tagging data is present in the data file, a value of 0
here will auto-generate the tag parameter section in the control.ss_new file.
1 = Read following lines of tagging data.
COND = 1 Read the following long parameter lines:
LO HI INIT PRIOR <other entries> Block Fxn Parameter Label
-10 10 9 9 ... 0 #TG loss init 1
-10 10 9 9 ... 0 #TG loss chronic 1
1 10 2 2 ... 0 #TG loss overdispersion 1
-10 10 9 9 ... 0 #TG report fleet 1
-4 0 0 0 ... 0 #TG report decay 1

If there are multiple tagging groups or multiple fleets reporting tagged fish the additional needed parameter lines should be entered in order for each parameter type (i.e., TG loss init 1, TG loss init 2, TG loss chronic 1, TG loss chronic 2,..., TG report decay 1, TG report decay 2).

Five parameter types are required for tagging data. Both the tag loss parameters and the reporting rate parameters are input as any real number and a logistic transformation is used to keep the resulting rates between 0 and 1:

The tag reporting rate parameter is transformed during estimation to keep the resulting rate between 0 and 1: \[\text{Tagging Reporting Rate} = \frac{e^{\text{input parameter}}}{1+e^{\text{input parameter}}}\]
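Expressed in R, this transformation is simply:

    # logistic transformation of the unbounded input parameter to a rate in (0, 1)
    tag_report_rate <- function(par) exp(par) / (1 + exp(par))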

Currently, tag parameters cannot be time-varying.

A shortcoming was identified in the recapture calculations when using Pope’s \(F\) Method and multiple seasons in SS3 prior to v.3.30.14. The internal calculations were corrected in v.3.30.14. Now the Z-at-age is applied internally for calculations of fishing pressure on the population when using the Pope calculations.

8.10.0.1 Mirroring of Tagging Parameters


In v.3.30.14, the ability to mirror the tagging parameters from another tag group or fleet was added. With this approach, the user can have just one parameter value for each of the five tagging parameter types and mirror all other parameters. Note that parameter lines are still required for the mirrored parameters and only lower-numbered parameters can be mirrored. Mirroring is invoked through the phase input in the tagging parameter section. The options are:

To avoid having to specify mirrored parameter lines, the tag parameters can be auto-generated. The control.ss_new file created after running this model will have a full set of tagging parameter lines to use in future model runs.

8.11 Variance Adjustment Factors

When doing iterative re-weighting of the input variance factors, it is convenient to do this in the control file, rather than the data file. This section creates that capability.

Read variance adjustment factors to be applied:
Factor Fleet Value Description
1 2 0.5 # Survey cv for survey/fleet 2
4 1 0.25 # Length data for fleet 1
4 2 0.75 # Length data for fleet 2
-9999 0 0 # End read

8.11.0.1 Additive Survey cv - Factor 1


While this functionality has been retained for backward compatibility with older model versions, this approach is no longer considered best practice for tuning indices. Instead, indices should be tuned by estimating additional standard deviation in the Catchability setup.

The survey input variance (labeled survey cv) is actually the standard deviation of the ln(survey). The variance adjustment is added directly to this standard deviation. Set to 0.0 for no effect. Negative values are OK, but the model will crash if an adjusted standard deviation becomes negative.

8.11.0.2 Additive Discard - Factor 2


The input variance is the cv of the observation. Because this will cause observations of near zero discard to appear overly precise, the variance adjustment is added to the discard standard deviation, not to the cv. Set to 0.0 for no effect.

8.11.0.3 Additive Mean Body Weight - Factor 3


The input variance is in terms of the cv of the observation. Because such data are typically not very noisy, the variance adjustment is added to the cv and then multiplied by the observation to get the adjusted standard deviation of the observation.

8.11.0.4 Multiplicative Length Composition - Factor 4


The input variance is in terms of an effective sample size. The variance adjustment is multiplied by this sample size. Set the variance adjustment to 1.0 for no effect.

8.11.0.5 Multiplicative Age Composition - Factor 5


Age composition is treated the same way as length composition.

8.11.0.6 Multiplicative Size-at-Age - Factor 6


Size-at-age input variance is the sample size for the N observations at each age. The variance adjustment is multiplied by these N values. Set to 1.0 for no effect.

8.11.0.7 Multiplicative Generalized Size Composition - Factor 7


Generalized size composition input variance is the sample size for each observation. The variance adjustment for each fleet is multiplied by these sample sizes. Set to 1.0 for no effect.

8.11.0.8 Variance Adjustment Usage Notes


The report.sso output file contains information in the “FIT_LEN_COMPS” and “FIT_AGE_COMPS” sections that is useful for determining whether an adjustment of these input values is warranted to better match the scale of the average residual to the input variance scale.

Because the actual input variance factors are modified, it is these modified variance factors that are used when creating parametric bootstrap data files. So, the control files used to analyze bootstrap generated data files should have the variance adjustment factors reset to null levels.

8.12 Lambdas (Emphasis Factors)

These values are multiplied by the corresponding likelihood component to calculate the overall negative log likelihood to be minimized.

Typical Value Description and Options
4 Max lambda phase: read this number of lambda values for each element below. The last lambda value is used for all higher numbered phases.
1 sd offset:
0 = omit the + ln(s) term from the ln(like),
1 = include the + ln(s) term in the ln(like) for cpue, discard, growth cv, mean body weight, and recruitment deviations. If you are estimating any variance parameters, sd offset must be set to 1.

8.12.0.1 Lambda Usage Notes


If the cv for size-at-age is being estimated and the model contains mean size-at-age data, then the flag for inclusion of the + ln(sd) term in the likelihood must be included. Otherwise, the model will always get a better fit to the mean size-at-age data by increasing the parameter for cv of size-at-age.

The reading of the lambda values has been substantially altered with v.3.30. Instead of reading a matrix containing all the needed lambda values, the model now just reads those elements that will be given a value other than 1.0. After reading the datafile, the model sets lambda equal to 0.0 if there are no data for a particular fleet/data type, and a value of 1.0 if data exist. So beware if your data files had data, but you had set the lambda to 0.0 in a previous version of SS3. First read an integer for the number of changes.

You can put any placeholder value like 0 or 999 for fleet if the likelihood component is not fleet specific (like recdevs).

You can also put any placeholder value like 0 or 999 for the SizeFreq Method unless the likelihood component you are changing the lambda for is 6 = size frequency, in which case you need to have a row for each size frequency method you want to modify and put the associated method number in that fourth column.

Read the lambda adjustments by fleet and data type:
Likelihood Component Fleet Phase Lambda Value SizeFreq Method
1 2 2 1.5 1
4 2 2 10 1
10 0 2 1 0 #not_fleet_specific
6 2 2 1.5 1 #size_frequency_method_1
6 2 2 1 2 #size_frequency_method_2
4 2 3 0.2 1
-9999 1 1 1 1
The codes for component are:

1 = survey
2 = discard
3 = mean weight
4 = length
5 = age
6 = size frequency
7 = size-at-age
8 = catch
9 = initial equilibrium catch (see note below)
10 = recruitment deviations
11 = parameter priors
12 = parameter deviations
13 = crash penalty
14 = morph composition
15 = tag composition
16 = tag negative binomial
17 = \(F\) ballpark
18 = regime shift

Starting in v.3.30.16, the application of a lambda to initial equilibrium catch is now fleet specific. In previous versions, a single lambda was applied in the same manner across all fleets with an initial equilibrium catch specified.

8.13 Controls for Variance of Derived Quantities

Additional standard deviation reporting may be selected:

Typical Value Description and Options
0 0 = No additional sd reporting;
1 = read specification for reporting sd for selectivity, size, numbers; and
2 = read specification for reporting sd for selectivity, size, numbers,
natural mortality, dynamic \(B_{0}\), and Summary Bio

COND = 1 or 2: Read the following lines (split into 3 rows for readability):

COND = 2, enter the above quantities plus (available in versions 3.30.15 and higher):

Depending upon the options entered above, subsequent conditional inputs may be needed.

Example Input:
2 # 0 = No additional sd reporting;
  # 1 = read specification for reporting sd for selectivity, size, numbers; and
  # 2 = read specification for reporting sd for selectivity, size, numbers,
  #     and natural mortality.
1 1 -1 5 # Selectivity
1 5 # Growth
1 -1 5 # Numbers-at-age
1 5 # M-at-age
1 # Dynamic Bzero
1 # Summary Biomass
5 15 25 35 38 # Vector with selectivity std bins (-1 in first bin to self-generate)
1 2 5 10 15 # Vector with growth std ages picks (-1 in first bin to self-generate)
1 2 5 10 15 # Vector with numbers-at-age std ages (-1 in first bin to self-generate)
1 2 5 10 15 # Vector with M-at-age std ages (-1 in first bin to self-generate)
999 #End of the control file input

9 Optional Inputs

9.1 Empirical Weight-at-Age (wtatage.ss)

The model has the capability to read empirical body weight-at-age for the population and each fleet, in lieu of generating these weights internally from the growth parameters, weight-at-length, and size-selectivity. Selection of this option is done by setting an explicit switch near the top of the control file. The values are read from a separate file named wtatage.ss. This file is only required to exist if this option is selected.

The first value read is a single integer for the maximum age used in reading this file, so if the maximum age is 40, there will be 41 columns of weight-at-age entries to read, with the first column being for age 0. If the number of ages specified in this table is greater than the maximum age in the model, the extra weight-at-age values are ignored. If the number of ages in this table is less than the maximum age in the model, the values for the oldest age in the file are filled in for all unread ages out to the maximum age.
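For checking or post-processing, the file can be read into R with the r4ss package (a minimal sketch, assuming r4ss is installed and wtatage.ss is in the working directory):

    # read the weight-at-age file into a data frame for inspection
    wtatage <- r4ss::SS_readwtatage(file = "wtatage.ss")
    head(wtatage)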

The format of this input file is:

40 Maximum Age
Year Season Sex Growth-Pattern Birth-Season Fleet Age-0 Age-1 ...
1 1 1 1 -2 0 0 0.1003
1 1 1 1 -1 0.0169 0.0864 0.2495
1 1 1 1 0 ... ... ...
1 1 1 1 1 ... ... ...
1 1 1 1 2 ... ... ...
-9999 1 1 1 1 0 ... ... ...

where:

9.1.0.1 Caveats


9.1.0.2 User Testing


9.2 runnumber.ss

This file contains a single integer value. It is read when the program starts, incremented by 1, used when processing the profile value inputs (see below), used as an identifier in the batch output, then saved with the incremented value. Note that this incrementation may not occur if a run crashes.

9.3 profilevalues.ss

This file contains information for changing the value of selected parameters for each run in a batch. In the control file, each parameter that will be subject to modification by profilevalues.ss is designated by setting its phase to -9999.

The first value in profilevalues.ss is the number of parameters to be batched. This value MUST match the number of parameters with phase set equal to -9999 in the control file; the program performs no check of this equality. If the first value is zero, then nothing else is read. Otherwise, the model will read runnumber * Nparameters values and use the last Nparameters of these to replace the initial values of the parameters designated with phase = -9999 in the control file.
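For example, a profilevalues.ss file for two parameters over six runs could be generated in R (a sketch; parameter names and values are illustrative):

    # first value: number of parameters; then one value per parameter per run
    grid <- expand.grid(M = c(0.16, 0.18, 0.20), h = c(0.35, 0.40))
    vals <- c(ncol(grid), as.vector(t(as.matrix(grid))))
    writeLines(as.character(vals), "profilevalues.ss")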

Usage Note: If one of the batch runs crashes before saving the updated value of runnumber.ss, then the processing of profilevalues.ss will not proceed as expected, so check the output carefully until a more robust procedure is developed. Also, this option was created before the use of R became widespread; you can probably create a more flexible approach using R today.

10 Likelihood components

The objective function \(L\) is the weighted sum of the individual components indexed by kind of data \(i\), and fishery/survey \(f\) as appropriate: \[L = \sum_{i=1}^{I}\sum_{f=1}^{A_f}\lambda_{i,f} L_{i,f}+\lambda_R L_R + \sum_{\theta}^{}\lambda_\theta L_\theta + \sum_{P}^{}\lambda_P L_P + \lambda_{F_B} L_{F_B} + \lambda_{C_P} L_{C_P}\] where \(L\) is the total objective function, \(i\) is the index of the objective function component, \(A_f\) is the number of fleets, \(L_{i,f}\) is the objective function for data kind \(i\) for the fishery/survey \(f\), \(\lambda_{i,f}\) is a weighting factor for each objective function component, \(\theta\) is the parameter priors, and \(P\) is the random parameter deviations.

The components of the objective function based on the model set-up and data are:

Index Source Data/Parameter Error structure
\(i\) fishery/survey \(f\) cpue or Abundance index user choice
\(i\) fishery \(f\) Discard Biomass user choice
\(i\) fishery/survey \(f\) Mean body W or L (all ages) normal
\(i\) fishery/survey \(f\) Generalized size (W or L) composition multinomial or Dirichlet-multinomial
\(i\) fishery/survey \(f\) L Composition multinomial or Dirichlet-multinomial
\(i\) fishery/survey \(f\) Age Composition multinomial or Dirichlet-multinomial
\(i\) fishery/survey \(f\) Mean L (or W)-at-age normal
\(i\) fishery/survey \(f\) Tag-recapture 1 multinomial
\(i\) fishery/survey \(f\) Tag-recapture 2 negative binomial
\(i\) fishery \(f\) Initial equilibrium catch log-normal
\(i\) fishery \(f\) catch log-normal
\(R\) Recruitment Deviations log-normal
\(P\) Random parameter devs normal
\(\theta\) Parameter priors user choice
\(F_B\) \(F\) ballpark penalty
\(C_P\) Crash Penalty

11 Running Stock Synthesis

11.1 Command Line Interface

The name of the SS3 executable file often contains the phrase “safe” or “opt” (for optimized). The safe version includes checking for out-of-bounds values and should always be used whenever there is a change to the data file. The optimized version runs slightly faster but can result in data not being included in the model as intended if the safe version has not been run first. A file named ss3.exe is typically the safe version, unless the user has renamed a different version.

On Mac and Linux computers, the executable does not include an extension (like .exe on Windows). Running the executable from the DOS command line in Windows simply requires typing the executable name (without the .exe extension):

	> ss3
	

On Mac and Linux computers, the executable name must be preceded by a period and slash (unless its location has been added to the user’s PATH). Note that the user may need to change permissions for Stock Synthesis to be executable before running SS3 for the first time:

	> chmod a+x ss3
	> ./ss3
	

An additional command allows users to specify the name of the .par file that is both read and output. Prior to v.3.30.22.1, the default executable name was ss and the default .par file name was ss.par. The code now produces an ss3.par file by default instead of an ss.par file. For backwards compatibility, the code will search for the default ss3.par file first and then look for an ss.par file, but will by default output an ss3.par file (not an ss.par file). If you would like to read a differently named .par file and produce a .par file with the same name, you will need to add the modelname command. See the example below, which uses modelname to read and produce a .par file named ss4you.

	> ./ss3 modelname ss4you
	

Additional admb commands can follow the executable name, such as -nohess to avoid calculating the Hessian matrix. To see a full list of options, add -? after the executable name (with a space in between).

On all operating systems, a copy of the SS3 executable can either be located in the same directory as the model input files or in a central location and referenced by adding it to the PATH or by script files. Further discussion on script files for Windows is below.

Often there is a need to run the model with no estimation. Alternative methods to run SS3 without estimating parameters are documented in the Running Without Estimation section.

As of admb 12.3, a new command called -hess_step is available and is documented in the Using -hess_step to do additional Newton steps using the inverse Hessian section.

11.1.1 Example of DOS batch input file

One file management approach is to put ss3.exe in its own folder (example: C:\SS3_model) and to put your input files in a separate folder (example: C:\My Documents\SS3_runs). Then a DOS batch file in the SS3_runs folder can be run at the command line to start ss3.exe. All output will appear in the SS3_runs folder.

A DOS batch file (e.g., SS3.bat) might contain some explicit admb commands, some implicit commands, and some DOS commands:

	c:\SS3_model\ss3.exe -cbs 5000000000 -gbs 50000000000 %1 %2 %3 %4
	del ss.r0*
	del ss.p0*
	del ss.b0*
	

In this batch file, the -cbs and -gbs arguments allocate a large amount of memory for SS3 to use (you may need to edit these for your computer and SS3 configuration), and the %1, %2, etc., allow passing of command line arguments such as -nox or -nohess. Add more items to the list of % arguments as needed.

An easy way to start a command line in your current directory (SS3_runs) is to create a shortcut to the DOS command line prompt. The shortcut’s target would be:

	> %SystemRoot%\system32\cmd.exe
	

And it would start in:

	> %CURRDIR%
	

An alternative shortcut is to have the executable within the model folder, then use Ctrl+Shift+Right Click and select either “Open command window here” or “Open PowerShell window here”, depending upon your computer. From the command window, the executable name can be typed along with additional inputs (e.g., -nohess) and the model run. If using PowerShell, type cmd and hit enter before calling the model (ss3).

11.1.2 Simple Batch

This first example relies upon having a set of prototype SS3 input files, where a starter file named starter.r01 can be renamed to starter.ss and then used in the SS3 run. The example also copies one of the output files, ss.std, to a new name, ss-std01.txt, to save it from being overwritten in subsequent runs. The example code should be put in a batch file, which can have any name with the .bat extension. Note that brief output from each run will be appended to cumreport.sso (see below).

	del ss.cor
	del ss.std
	copy starter.r01 starter.ss
	c:\admodel\ss3\ss3.exe -sdonly
	copy ss.std ss-std01.txt
	

The commands could be repeated, except the output should be copied to a different file, e.g., ss-std02.txt. This sequence can be repeated an unlimited number of times.

11.1.3 Complicated Batch

This second example processes 25 data files from a different directory, each time using the same control file. The loop index is used in the file names, and the output is searched for particular keywords to accumulate a few key results into the file SUMMARY.TXT. Comparable batch processing can be accomplished by using R or other script processing programs.

	del summary.txt
	del ss-report.txt
	copy /Y runnumber.zero runnumber.ss
	FOR /L %%i IN (1,1,25) DO (
	copy /Y ..\MakeData\A1-D1-%%i.dat  Asel.dat
	del ss.std
	del ss.cor
	del ss3.par
	c:\admodel\ss3\ss3.exe
	copy /Y ss3.par A1-D1-A1-%%i.par
	copy /Y ss.std A1-D1-A1-%%i.std
	find "Number" A1-D1-A1-%%i.par >> Summary.txt
	find "hessian" ss.cor >> Summary.txt)
	

11.1.4 Running Without Estimation

There may be times when users will want to run the model without parameter estimation. The admb command -noest will not work with Stock Synthesis, as it bypasses the procedure section. There are two suggested alternative approaches to do this with SS3 and admb.

The first approach requires the user to change the maximum phase value in the starter.ss file to 0 and then run the model from the command window without calculating the Hessian:

	ss3 -nohess
	

The second approach is done all through the command window using the following commands:

	ss3 -maxfn 0 -phase 50 -nohess
	

where -maxfn specifies the maximum number of function calls and -phase is the phase in which the model starts estimation; this number should be greater than the maximum phase used for estimating parameters within the model.

However, the approaches above differ in subtle ways. First, if the maximum phase is set to 0 in the starter file, the total likelihood will differ by a small amount (0.25 likelihood units) compared to the second approach, which sets -maxfn and -phase in the command window. This small difference is due to a dummy parameter which is evaluated by the objective function when the maximum phase in the starter is set to 0, resulting in a small contribution to the total likelihood of 0.25. All other likelihood components should be unchanged.

The second difference between the two no-estimation approaches is the reported number of “Active_count” parameters in the Report file. If the command line approach is used (ss3 -maxfn 0 -phase 50 -nohess), the active count will equal the number of parameters with positive phases, but because the model starts in a phase greater than the maximum phase in the model, these parameters do not move from their initial values in the control file (or the par file). The first approach, where the maximum phase is changed in the starter file, will report the “Active_count” of parameters as 0.

The final thing to consider when running a model without estimation is whether you are starting from the par file or the control file. If you start from the par file (specified in the starter file: 1=use ss3.par) then all parameters, including parameter deviations, will be fixed at the estimated values. However, if the model is not run with the par file, any parameter deviations (e.g., recruitment deviations) will not be included in the model run (a user could paste in the estimated recruitment deviations into the control file).

11.1.4.1 Generate .ss_new files


There may be instances where a user would like to generate the .ss_new files without running the model, with or without estimation. There are two approaches a user can take. The first is to manually change the maxphase in the starter.ss file to -1 and run the model as normal; this will generate the files without running through the model dynamics (e.g., no Report file will be created). The maxphase in the starter.ss_new file will be set to -1 and will need to be manually changed back if the intent is to replace the original file (i.e., starter.ss) with the new one (i.e., starter.ss_new). The second approach is to modify the maxphase via a command line input, calling the model with:

	ss3 -stopph -1
	

where -1 is the maximum phase for the model to run through (e.g., can be other values if a user would like to only run through a specific parameter phase). This approach will create all the new files with the starter.ss_new reflecting the original maxphase value in the starter.ss file. This approach is available in v.3.30.16 and later.

11.1.5 Using -hess_step to do additional Newton steps using the inverse Hessian

The optimizer in admb is designed to run until the maximum absolute gradient (mag) is small enough (e.g., 1e-05), after which it quits and does the uncertainty calculations. But if run for longer it cannot appreciably decrease this mag. In many cases it is interesting or advisable to get closer to the mode to confirm convergence of the model.

A new feature as of admb 12.3 called “-hess_step” takes Newton steps to update the mle using the information in the Hessian, calculated as \[\text{MLE}_{new} = \text{MLE} - H^{-1}g\] where the Hessian \(H\) and gradient \(g\) are calculated at the original mle. If the mag improves, this corroborates that the optimizer has converged and that the negative log-likelihood surface is approximately quadratic at the mode, as assumed in the asymptotic uncertainty calculations. The downside is the high computational cost due to the extra matrix calculations.

The feature is used by optimizing normally and then running -hess_step from the command line: either with defaults (recommended), as -hess_step N, or as -hess_step_tol eps, where N is the maximum number of steps to take and eps is the tolerance (i.e., a very small number like 1e-10) after which to stop. When running the Hessian first and then the -hess_step, admb will prompt you to run it with -binp ss.bar.

11.1.6 Running Parameter Profiles

Users will often want to run profiles over a specific parameter to evaluate how much information the model has to estimate that parameter, based on changes in the log likelihood. There are two ways to do this.

The first option is to use functions within r4ss to run the profile, summarize quantities across runs, and plot the output. The SS_profile() function will run the profile based on function inputs, SSgetoutput() will read quantities from each run's Report file, SSsummarize() will summarize key model quantities, and the SSplotProfile() and PinerPlot() functions can be used to visualize results. Additional information regarding r4ss can be found in the r4ss section.
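For example, a steepness profile might be set up as follows (a sketch only; directory names and the profile vector are illustrative, and argument names may differ among r4ss versions):

    library(r4ss)
    profilevec <- seq(0.4, 0.9, by = 0.1)
    # run the profile in the folder "mydir", fixing the parameter whose
    # label contains "steep" at each value in profilevec
    SS_profile(dir = "mydir", string = "steep", profilevec = profilevec)
    # read, summarize, and plot the results
    profilemodels <- SSgetoutput(dirvec = "mydir", keyvec = seq_along(profilevec))
    profilesummary <- SSsummarize(profilemodels)
    SSplotProfile(profilesummary, profile.string = "steep",
                  profile.label = "Spawner-recruit steepness (h)")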

The second way is to create and run a batch file to profile over parameters. This example runs a profile over natural mortality and spawner-recruitment steepness. Edit the control file so that the natural mortality parameter and steepness parameter lines have the phase set to -9999. Edit starter.ss to refer to this control file and the appropriate data file.

Create a profilevalues.ss file
2 # number of parameters using profile feature
0.16 # value for first selected parameter when runnumber equals 1
0.35 # value for second selected parameter when runnumber equals 1
0.16 # value for first selected parameter when runnumber equals 2
0.40 # value for second selected parameter when runnumber equals 2
0.18 # value for first selected parameter when runnumber equals 3
0.40 # value for second selected parameter when runnumber equals 3
etc.; make it as long as you like.

Create a batch file that looks something like this. Or make it more complicated as in the example above.

	del cumreport.sso
	copy /Y runnumber.zero runnumber.ss
	REM start with runnumber=0
	C:\SS330\ss3.exe 
	C:\SS330\ss3.exe 
	C:\SS330\ss3.exe 

Repeat as many times as you have set up conditions in the profilevalues.ss file. The summary results will all be collected in the cumreport.sso file. Each step of the profile will have a unique run number and its output will include the values of the natural mortality and steepness parameters for that run.

11.1.7 Re-Starting a Run

Model runs can be restarted from a previously estimated set of parameter values. In the starter.ss file, enter a value of 1 on the first numeric input line. This will cause the model to read the file ss3.par and use these parameter values in place of the initial values in the control file. This option only works if the number of parameters to be estimated in the new run is the same as the number of parameters in the previous run because only actively estimated parameters are saved to the file ss3.par. The file ss3.par can be edited with a text editor, so values can be changed and rows can be added or deleted. However, if the resulting number of elements does not match the setup in the control file, then unpredictable results will occur. Because ss3.par is a text file, the values stored in it will not give exactly the same initial results as the run just completed. To achieve greater numerical accuracy, the model can also restart from ss.bar which is the binary version of ss3.par. In order to do this, the user must make the change described above to the starter.ss file and must also enter -binp ss.bar as one of the command line options.

11.1.8 Optional Output Subfolders

As of v.3.30.19, users can optionally send .sso and .ss_new extension files to subfolders. To send files with a .sso extension to a subfolder within the model folder, create a subfolder called sso before running the model. To send files with a .ss_new extension to a separate subfolder, create a folder called ssnew before running the model.
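For example, on the command line from within the model folder:

	> mkdir sso
	> mkdir ssnew
	> ss3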

11.2 Putting Stock Synthesis in your PATH

Instead of copying the SS3 executable to each model folder, SS3 can be put in your system path, which is a list of folders that your operating system looks in whenever you type the name of a program on the command line. This approach saves on storage space since the SS3 binary (i.e., the SS3 executable or exe) is about 2.2 MB and having it located in each folder can be prohibitive in a large-scale simulation testing study. Even if you are not running a large simulation study, putting SS3 in your path may still be convenient, as you can use the same executable on many models, there is no need to specify a full file path to the executable each time you run a model, and no need to create a batch file that refers to the executable’s location.

11.2.1 For Unix (OS X and Linux)

To check if SS3 is in your path, assuming the binary is named SS3: open a Terminal window and type which SS3 and hit enter. If you get nothing returned, then SS3 (named SS3 or SS3.exe) is not in your path. The easiest way to fix this is to move the SS3 binary to a folder that’s already in your path. To find existing path folders type echo $PATH in the terminal and hit enter. Now move the SS3 binary to one of these folders.

For example, in a Terminal window type:

    sudo cp ~/Downloads/SS3 /usr/bin/

to move a binary called SS3 from the “Downloads” folder to /usr/bin. You will need to use sudo and enter your password to have permission to move a file to a folder like /usr/bin/, because doing so edits the system for other users also.

Also note that you may need to add executable permissions to the SS3 binary after downloading it. You can do that by switching to the folder where you placed the binary (cd /usr/bin/ if you followed the instructions above), and running the command:

    sudo chmod +x SS3

Check that SS3 is now executable and in your path:

    which SS3

If you followed the instructions above, you will see the following line returned:

    /usr/bin/SS3

If you have previously modified your path to add a non-standard location for the SS3 binary, you may need to also tell R about the new path. The path that R sees may not include additional paths that you have added through a configuration file like .bash_profile. If needed, you can add to the path that R sees by including a line like this in your .Rprofile file (.Rprofile is an invisible file in your home directory).

    Sys.setenv(PATH=paste(Sys.getenv("PATH"), "/my/folder", sep=":"))

11.2.2 For Windows

To check if SS3 is in your path for Windows, open a DOS prompt (Command Prompt) and type SS3 -? and hit enter. If the prompt returns a message like SS3 is not recognized..., then SS3 is not in your path (assuming the SS3 executable is called SS3.exe).

To add the SS3 binary file to your path, follow these steps:

  1. Find the correct version of the SS3.exe binary on your computer (or download from the SS3 releases).

  2. Move to and note the folder location. E.g., C:/SS3/

  3. Click on the start menu and type environment

  4. Choose Edit environment variables for your account under Control Panel

  5. Click on PATH if it exists, create it if it does not exist

  6. Choose PATH and click edit

  7. In the Edit User Variable window add to the end of the Variable value section a semicolon and the SS3 folder location you recorded earlier. E.g., ;C:/SS3. Do not overwrite what was previously in the PATH variable.

  8. Restart your computer

  9. Go back to the DOS prompt and try typing SS3 -? and hitting return again.

11.3 Running Stock Synthesis from R

Use system("path/to/ss3") to run Stock Synthesis from within the R console, where path/to/ss3 is the path to and name of the Stock Synthesis binary.

Alternatively, use the function run() from the r4ss package within the R console:

  # run model, in directory folder_1, using the SS3 executable
  # named ss3 that is in the path.
  r4ss::run(dir = "folder_1",
            exe = "ss3")

Running SS3 from within R may be desirable for setting up simulations where many runs of SS3 models are required (e.g., ss3sim) or if r4ss is already used to read model output. Additional information regarding r4ss can be found in the r4ss section.

11.4 The Stock Synthesis GUI (SSI)

SSI (a.k.a. the SS3 GUI) provides an interface for loading, editing, and running model files, and can also link to r4ss to generate plots. Note that SSI is not maintained for Stock Synthesis versions after v.3.30.21.

11.5 The Stock Assessment Continuum Tool

The Stock Assessment Continuum Tool (previously known as the Stock Synthesis Data-limited Tool) is a Shiny-based application that uses SS3 as the flexible framework to apply a variety of model types depending on the available data (catch time-series, age-composition, length-composition, abundance index data). It is meant to make SS3 accessible to users, open up many features and tools associated with SS3, provide an easy way to enter data in the model, and make model specification and uncertainty exploration easier.

11.6 Debugging Tips

When input files are causing the program to crash or fail to produce sensible results, there are a few steps that can be taken to diagnose the problem. Before trying the steps below, examine the echoinput.sso file. It is highly annotated, so you should be able to see if the model is interpreting your input files as you intended. Additionally, users should check the warning.sso file when attempting to debug a non-running model.

  1. Set the turn_off_phase switch to 0 in the starter.ss file. This will cause the model not to adjust any parameters; it simply converges a dummy parameter. It will still produce a Report.sso file, which can be examined to see what has been calculated from the initial parameter values.

  2. Turn the verbosity level to 2 in the starter.ss file. This will cause the program to display the value of each likelihood component to the screen on each iteration. So if the program is creating an illegal computation (e.g., divide by zero), it may show you which likelihood component contains the problematic calculation. If the program is producing a Report.sso file, you may then see which observation is causing the illegal calculation.

  3. Run the program with the command ss3 >>SSpipe.txt. This will cause all screen display to go to the specified text file (note: delete this file before running because output will be appended to it). Examination of this file will show detailed statements produced during the reading and preprocessing of input files.

  4. If the model fails to achieve a proper Hessian, it exits without writing the detailed outputs in the FINAL_SECTION. If this happens, you can do a run with the -nohess option so you can view Report.sso to attempt to diagnose the problem.

  5. If the problem is with reading one or more of the input files, please note that certain Mac line endings cannot be read by the model (although this is a rare occurrence). Be sure to save the text files with Windows or Linux style line endings so that the executable can parse them.

11.7 Keyboard Tips

Typing “N” during a run will cause admb to immediately advance to the next phase of estimation.

Typing “Q” during a run will cause admb to immediately go to the final phase. This bypasses estimation of the Hessian and will produce all the model outputs, which are coded in the FINAL_SECTION.

11.8 Running MCMC

Running SS3 with mcmc can be done through command line options using the default admb mcmc algorithm (described below). Another possibility is the R package adnuts; see the adnuts vignette for more information. The mcmc guide for admb provides the most comprehensive guidance available for using mcmc with admb models (such as SS3). Additional guidance is available in Monnahan et al. (2019).

Running SS3 with mcmc (instead of mle) provides mpd estimates, the report file, the Hessian matrix, and the .cor file. Parameters stuck on bounds will degrade the efficiency of the mcmc implementation. Two commands are needed to obtain the model results:

  1. Run SS3 with arguments -mcmc xxxx -mcsave yyyy

  2. Run SS3 with argument -mceval to get more summaries
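For example (chain length and thinning interval are illustrative):

	> ss3 -mcmc 1000000 -mcsave 1000
	> ss3 -mceval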

Note that when the model is switched to mcmc or MCEVAL mode, all the bias adjustment factors become 1.0 for any years with recruitment deviations. A report file is not created after completing mcmc because it would show values based only on the last mcmc step.

12 Output Files

When Stock Synthesis is run, numerous text files are produced in the same folder as the input files. Many of these are automatically created by ADMB and can be ignored. The files in the following table are typically used by SS3 users to see the model results, understand how SS3 is interpreting the inputs, and debug models.

Output with Results Output Mirroring Input Output for Debugging
Report.sso starter.ss_new warning.sso
ss_summary.sso control.ss_new echoinput.sso
CompReport.sso data_echo.ss_new ParmTrace.sso
covar.sso forecast.ss_new
Forecast-report.sso wtatage.ss_new
ss3.par

12.1 Main Output File, Report.sso

This is the primary output file. The file starts with KeyWords_of_tables_available_in_report_sso which is a list of tables organized by category indicating which tables are included in the Report file. The full list of tables (as of v.3.30.23) is listed below.

Report Number Report file notation r4ss notation Notes
Description (if needed)
1 DEFINITIONS Split into variables: $nseasons, $seasfracs, $seasdurations, and $FleetNames List of definitions (e.g., fleet names, model start year) assigned in the data and control files.
2 LIKELIHOOD $likelihoods_used and $likelihoods_by_fleet Final values of the negative log likelihood are presented.
3 Input_Variance_Adjustment NA The matrix of input variance adjustments is output here because these values affect the log likelihood calculations.
4 Parm_devs_detail $Parm_devs_detail Details about parameter deviations, if used in the model. Will be missing if no parameter deviations were used. Shows controlling parameters sd and Rho as well as statistics of time series of deviations. The statistics include the mean, rmse, var, est_rho (estimated autocorrelation), and D-W (Durbin-Watson) statistic.
Parameter deviations detail
5 PARAMETERS $parameters The parameters are listed here. For the estimated parameters, the display shows: Num (count of parameters), Label (as internally generated by SS3), Value, Active_Cnt, Phase, Min, Max, Init, Prior, Prior_type, Prior_SD, Prior_Like, Parm_StD (standard deviation of parameter as calculated from inverse Hessian), Status (e.g., near bound), and Pr_atMin (value of prior penalty if parameter was near bound). The Active_Cnt entry is a count of the parameters in the same order they appear in the ss.cor file. Also included are values derived from the prior distribution (Value-1.96*sd, Value+1.96*sd, V_1%, V_10%, etc.).
6 DERIVED_QUANTITIES $derived_quants

This section starts by showing the options selected from the starter.ss and forecast.ss input files:

  • SPR ratio basis,

  • \(F\) report basis,

  • B ratio denominator

Then the time series of output, with standard deviation of estimates, are produced with internally generated labels. Note that these time series extend through the forecast era. The order of the output is: spawning biomass, recruitment, SPRratio, Fratio, Bratio, management quantities, forecast catch (as a target level), forecast catch as a limit level (OFL), Selex_std, Grow_std, NatAge_std. For the three “ratio” quantities, there is an additional column of output showing a Z-score calculation of the probability that the ratio differs from 1.0. The “management quantities” section is designed to meet the terms of reference for west coast groundfish assessments; other formats could be made available upon request. The standard deviation quantities at the end are set up according to specifications at the end of the control input file. In some cases, a user may specify that no derived quantity output of a certain type be produced. In those cases, SS3 substitutes a repeat output of the virgin spawning biomass so that vectors of null length are not created.

7 MGparm_By_Year_after_adjustments $MGparmAdj This block shows the time series of mortality and growth parameters by year after adjustments by environmental links, blocks, and deviations.
Mortality and growth parameters by year after adjustments
8 selparm(Size)_By_Year_after_adjustments $SelSizeAdj This block shows the size selectivity parameters by year after adjustments by environmental links, blocks, and deviations.
Selectivity parameters (size) by year after adjustments
9 selparm(Age)_By_Year_after_adjustments $SelAgeAdj This block shows the age selectivity parameters by year after adjustments by environmental links, blocks, and deviations.
Selectivity parameters (age) by year after adjustments
10 RECRUITMENT_DIST $recruitment_dist (a list containing $recruit_dist, $recruit_dist_Bmark, and $recruit_dist_endyr) This block shows the distribution of recruitment across growth patterns, sexes, settlement events, and areas in the end year of the model.
Recruitment distribution
11 MORPH_INDEXING $morph_indexing This block shows the internal index values for various quantities. It can be a useful reference for complex model setups. Bio_Pattern refers to a collection of cohorts with the same defined growth and natural mortality parameters; sex is the next main index. If recruitment occurs in multiple events, then settlement event is the index for that factor. The index labeled “Platoon” is used as a continuous index across all the other factor-specific indices. If sub-platoons are used, they are nested within the Bio_Pattern \(\times\) Sex \(\times\) Birth Season platoon. However, some output tables use the column label “platoon” as a continuous index across platoons and sub-platoons. Note that there is no index here for area. Each of the cohorts is distributed across areas and they retain their biological characteristics as they move among areas.
Growth morph indexing
12 SIZEFREQ_TRANSLATION NA (only present when using generalized size frequency data) If the generalized size frequency approach is used, this block shows the translation probabilities between population length bins and the units of the defined size frequency method. If the method uses body weight as the accumulator, then output is in corresponding units.
Size frequency translation
13 MOVEMENT $movement Movement rate between areas in a multi-area model.
14 EXPLOITATION $exploitation Time series of the selected \(F\text{\_std}\) unit and the \(F\) for each fleet in terms of harvest rate (if Pope’s approximation is used) or fully selected \(F\). Also included are \(\text{annual\_}F\) and \(\text{annual\_}M\) from \(Z = -ln(N_{t+1}/N_{t})\) and \(F = Z-M\).
15 CATCH $catch Observed and expected catch and \(F\). Also vuln_bio, sel_bio, dead_bio, ret_bio, vuln_num, sel_num, dead_num, and ret_num by fleet, area, year, and season
16 TIME_SERIES $timeseries Biomass and recruitment by area, year, and season. Also has the same (i.e., redundant) detailed catch output as in the CATCH table.
17 SPR_SERIES $sprseries Table shows per recruit (i.e., equilibrium) quantities using the current year’s biology-at-age and fishery characteristics and levels.
18 Kobe_plot $Kobe (also $Kobe_warn and $Kobe_MSY_basis) Reports \(B/B_{MSY}\) and \(F/F_{MSY}\) needed to create a Kobe Plot.
19 SPAWN_RECRUIT $recruit Extensive information on Spawn-recruit setup, summary statistics, and time series. The summary statistics include the est_rho (estimated autocorrelation) and the D-W (Durbin-Watson) statistic.
Spawn recruit parameters and table
20 SPAWN_RECR_CURVE $SPAWN_RECR_CURVE A table containing values for plotting the spawn-recruit curve.
Spawn recruit curve
21 INDEX_1 $index_variance_tuning_check Table shows summary statistics for the fit to each survey or index, including the rmse of the fit to each index compared to the mean input error level to assist the user in gauging the goodness-of-fit and potentially adjusting the input level of imprecision.
22 INDEX_2 $cpue This section reports the observed and expected values for each survey, cpue, or other index. All are reported in one list with index number included as a selection field.
23 INDEX_3 NA Parameter number for each catchability parameter. The first column is the base parameter, the second is the extra sd parameter, the third is environmental link, and the fourth is the block or trend parameter.
24 DISCARD_SPECIFICATION $discard_spec Information on discard units and error type for each fleet with discards.
25 DISCARD_OUTPUT $discard This is the list of observed and expected values for the amount (or fraction) discard.
26 MEAN_BODY_WT_OUTPUT $mean_body_wt This is the list of observed and expected values for the mean body weight (of all selected sizes of fish).
Mean body weight
27 FIT_LEN_COMPS $len_comp_fit_table and $Len_Comp_Fit_Summary List of the goodness-of-fit to each length composition observation. The input and output levels of effective sample size are shown as a guide to adjusting the input levels to better match the model’s ability to replicate these observations. Also shown are the observed and expected mean length for females, males, and combined. Below the list are summary statistics for each fleet.
Fit length compositions
28 FIT_AGE_COMPS $age_comp_fit_table and $Age_Comp_Fit_Summary This age composition information has the same format as the length composition section.
Fit age compositions
29 FIT_SIZE_COMPS $size_comp_fit_table and $Size_Comp_Fit_Summary This generalized size composition section has the same format as the length composition section.
Fit size compositions
30 OVERALL_COMPS NA This is the unweighted average of composition data across all the samples for a given fleet. Cumulative vector is used for auto-placing knots of cubic spline selectivity.
Overall compositions
31 LEN_SELEX $sizeselex Length selectivity and other length specific quantities for each fishery and survey.
Length Selectivity
32 AGE_SELEX $ageselex Time series of age selectivity and other age-related quantities for each fishery and survey. Some are directly computed in terms of age, and others are derived from the combination of a length-based factor and the distribution of size-at-age. Includes F-at-age for fishing fleets and body weight-at-age.
Age Selectivity
33 ENVIRONMENTAL_DATA NA The input values of environmental data are echoed here. Density-dependence can be used by linking to population quantities that have already been calculated at the start of the year. These include summary biomass, spawning biomass, and recruitment deviations. These three quantities are mapped into the -1, -2, and -3 columns of the environmental data matrix where they can be used as if there were environmental data input.
34 TAG_Recapture $tagrelease, $tagsalive, $tagtotrecap, $tagfirstperiod, $tagaccumperiod, and $tagreportrates Multiple tables of information on tagged fish (if included in the model) including details on each tag group, tags alive by release group and period, total recaptures by release group and period, and reporting rates by fishery.
Tag recapture information
35 NUMBERS_AT_AGE $natage The abundance at age (in thousands of fish) for each area, year, and season is shown for each cohort (labelled Morph) tracked in the model. Biological identity of each Morph is in other columns. \(B\) is beginning of the season and \(M\) is the midpoint of the season.
36 BIOMASS_AT_AGE $batage The biomass at age (in metric tons) for each area, year, and season. Formatted the same way as the numbers-at-age table.
37 NUMBERS_AT_LENGTH $natlen The output is shown for each cohort tracked in the model.
38 BIOMASS_AT_LENGTH $batlen The output is shown for each cohort tracked in the model.
39 F_AT_AGE $fatage Dedicated table for the time series of F-at-age for each fleet. Redundant with the F-at-age shown in the Age_selex table.
40 CATCH_AT_AGE $catage The output is shown for each fleet. It is not necessary to show by area because each fleet operates in only one area.
41 DISCARD_AT_AGE $discard_at_age Similar to the catch-at-age report.
42 BIOLOGY $biology The first biology section shows the length-specific quantities in the ending year of the time series only. The derived quantity spawn is the product of female body weight, maturity and fecundity per weight.
43 Natural_Mortality $Natural_Mortality Time series of natural mortality by area, year, and season. Includes \(M2\) in models with predators.
44 AGE_SPECIFIC_K NA Age-specific von Bertalanffy \(k\) values for each Bio_Pattern and sex (if implemented in the model via growth options 3–5).
45 Growth_Parameters $Growth_Parameters This section shows the growth parameters and associated derived quantities for each year in which a change is estimated. This information overlaps with that in the MGPARM_by_year report.
46 Seas_Effects $Seas_Effects Table with seasonal effects on biology.
Seasonal effects
47 Biology_at_age_in_endyr $endgrowth This section shows derived size-at-age and other quantities. As of v.3.30.21 sex ratio is reported by area in this output table.
48 MEAN_BODY_WT(Begin) $mean_body_wt This section reports the time series of mean body weight for each morph. Values shown are for the beginning of each season of each year. Not included are the morph descriptors as found in report 35.
Mean body weight (start)
49 MEAN_SIZE_TIMESERIES $growthseries This section shows the time series of mean length-at-age for each morph. At the bottom is the average mean size as the weighted average across all morphs for each sex.
50 AGE_LENGTH_KEY $ALK This is reported for values in the model end year. Table values are given for each sub-season (Subseas) of each season: Subseas = 1 is the beginning of the season and Subseas = 2 is the midpoint of the season. There are only 2 sub-seasons.
51 AGE_AGE_KEY $AAK This is the calculated distribution of observed ages for each true age for each of the defined ageing keys.
52 COMPOSITION_DATABASE Separate variables by data type: $lendbase, $sizedbase, $agebase, $condbase, $ghostagedbase, $ghostcondbase, $ghostlendbase, $ladbase, $wadbase, $tagbase1, $tagbase2, and $morphcompdbase Written to a separate file, CompReport.sso. Contains the length composition, age composition, and mean size-at-age observed and expected values. It is arranged in a database format, rather than an array of vectors which enables the use of dynamic filtering when using a spreadsheet program.
53 SELEX_database NA This section contains the selectivities organized as a database, rather than as a set of vectors.
Selectivity database
54 SPR/YPR_Profile $equil_yield This table displays per recruit quantities across a range of \(F\) levels, using the biology- and selectivity-at-age from the benchmark (a.k.a. reference point) calculations. The SPRloop column indexes the successive sets of iterations used to obtain good coverage of the range of \(F\) levels.
55 GLOBAL_MSY NA This table shows msy calculations using the actual fishery selectivity and by replacing it with a range of knife-edge and per-age alternatives.
56 SS_summary.sso Read by SS_read_summary() See Stock Synthesis Summary section.
57 rebuilder.sso NA Specific to the U.S. West Coast rebuilding plans. See the Pacific Fishery Management Council rebuilder code on GitHub related to this output.
58 SIStable.sso NA See SIS table section.
59 Dynamic_Bzero $Dynamic_Bzero Dynamic \(B_{0}\) report
60 wtatage.ss_new $wtatage Contains mean weight-at-age for the population and each fleet as well as maturity \(\times\) fecundity. For models that use the wtatage.ss file as an input, the values will match. For models with parametric growth, the values will be based on internal calculations of mean weight-at-age. SS3 generates a wtatage.ss_new file even when these biological quantities are generated internally from parameters.
61 ANNUAL_TIME_SERIES $annual_time_series Table shows the annual abundance at the beginning of each year (collapses all areas) and the total catch across all fleets.
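
Once a run is read into R with the r4ss functions described in the Using R To View Model Output section, most of the tables above are available as elements of a named list under the r4ss notation shown in the table. A minimal sketch (the directory path is hypothetical):

	library(r4ss)
	# read Report.sso and associated output files into a named list
	replist <- SS_output("C:/myfiles/mymodels/myrun")
	# access individual tables using the r4ss notation from the table above
	pars <- replist$parameters       # PARAMETERS (report 5)
	dq   <- replist$derived_quants   # DERIVED_QUANTITIES (report 6)
	ts   <- replist$timeseries       # TIME_SERIES (report 16)
	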

12.2 Custom Reporting

Additional user control over what is included in the Report.sso file was added in v.3.30.16. Selecting custom reporting (option = 3) in the starter file allows full customization of what is printed to the Report file; specific items can be included or excluded via a list passed to SS3 from the starter file. The “Report Number” in the table above is used to select which outputs are included when using this option.
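
For illustration, a sketch of the relevant starter.ss lines under this option is shown below. The list conventions used here (a value of -100 to begin from a minimal set of tables, positive report numbers to add tables, and -999 to end the list) are assumptions to be verified against the starter file section of this manual and the comments written to starter.ss_new.

	3     # detailed reports option (3 = custom list of report tables; assumed comment)
	-100  # assumed code: start from the minimal set of tables
	5     # add PARAMETERS (report 5)
	16    # add TIME_SERIES (report 16)
	-999  # assumed end-of-list code
	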

12.3 Standard ADMB output files

Standard admb files are created by SS3. These are:

ss3.par (previously ss.par) - This file has the final parameter values. They are listed in the order they are declared in SS3. This file can be read back into SS3 to restart a run with these values (see Running Stock Synthesis for more info).

ss.std - This file has the parameter values and their estimated standard deviation for those parameters that were active during the model run. It also contains the derived quantities declared as standard deviation report variables. All of this information is also reported in the covar.sso. Also, the parameter section of Report.sso lists all the parameters with their SS3 generated names, denotes which were active in the reported run, displays the parameter standard deviations, then displays the derived quantities with their standard deviations.

ss.rep - This report file is created between phases, so unlike Report.sso it will be created even if the hessian fails. It does not contain as much output as Report.sso.

ss.cor - This is the standard admb report for parameter and standard deviation report correlations. It is in matrix form and challenging to interpret. This same information is reported in covar.sso.

12.4 Stock Synthesis Summary

The ss_summary.sso file (available for versions 3.30.08.03 and later) is designed to put key model outputs all in one concise place. It is organized as a list. At the top of the file are descriptors, followed by the 1) likelihoods for each component, 2) parameters and their standard errors, and 3) derived quantities and their standard errors. Total biomass, summary biomass, and catch were added to the quantities reported in this file in v.3.30.11 and later.
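
As noted in the output table above (report 56), this file can be read into R with the r4ss function SS_read_summary(); a minimal sketch:

	library(r4ss)
	# read ss_summary.sso into a list of likelihoods, parameters, and derived quantities
	model_summary <- SS_read_summary("ss_summary.sso")
	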

Before v.3.30.17, TotBio and SmryBio did not always match values reported in columns of the TIME_SERIES table of Report.sso. The report file should be used instead of ss_summary.sso for correct calculation of these quantities before v.3.30.17. Care should be taken when using the TotBio and SmryBio if the model configuration has recruitment after January 1 or in a later season, as TotBio and SmryBio quantities are always calculated on January 1. Consult the detailed age-, area-, and season-specific tables in Report.sso for calculations done at times other than January 1.

12.5 SIS table

The SIS_table.sso is deprecated as of v.3.30.17. Please use the r4ss function get_SIS_info() to produce output formatted for reading into the nmfs sis database.

12.6 Derived Quantities

Before listing the derived quantities reported to the standard deviation report, there are a couple of topics that deserve further explanation.

12.6.1 Virgin Spawning Biomass vs. Unfished Spawning Biomass

Unfished is the condition for which reference points (benchmarks) are calculated. Virgin Spawning Biomass is the initial condition on which the start of the time series depends. If biology or spawner-recruitment parameters are time-varying, then the benchmark year input in the forecast file tells the model which years to average in order to calculate “unfished”. In this case, virgin recruitment and/or the virgin spawning biomass will differ from their unfished counterparts. Virgin recruitment and spawning biomass are reported in the mgmt_quant portion of the sd_report and are now labeled as “unfished” for clarity. Note that if \(ln(R_{0})\) is time-varying, then this will cause unfished to differ from virgin. However, if the regime shift parameter is time-varying, then unfished will remain the same as virgin because the regime shift is treated as a temporary offset from virgin. Virgin spawning biomass is denoted as SSB_virgin and spawning biomass unfished is denoted as SSB_unfished in the report file.

Virgin Spawning Biomass is used in four ways within SS3:

  1. Anchor for the spawner-recruitment relationship as virgin spawning biomass.

  2. Basis for the initial equilibrium abundance.

  3. Basis against which annual depletion is calculated.

  4. Benchmark calculations.

However, if there is time-varying biology, then the 4th usage can have a different Virgin Spawning Biomass calculation compared to the other usages.

12.6.2 Metrics for Fishing Mortality

12.6.2.1 Annual Fishing Intensity


Fishery management systems expect to have a measure of annual fishing mortality that describes the intensity of the fishery such that an optimal level of \(F\) can be articulated, and accountability measures can be invoked if \(F\) is too high, e.g., overfishing. This concept is simple and straightforward if the model is a simple biomass dynamics such that a single annual \(F\) value operates on the entirety of a non-age structured population. It also is simple for age-structured models that have a single fishing fleet and knife-edge selectivity beginning at some specified age.

The simplicity of \(F\) disappears quickly as models invoke a variety of realistic complexities such as: allowing the \(F\) to differ among ages or to be based on size; using a collection of fleets with different \(F\) levels and different age patterns for \(F\); spreading the population across areas and allowing different fleets with different \(F\) among the areas. An unambiguous measure of annual fishing intensity that represents the cumulative effect of all that complexity has not been defined. This problem has not been solved with SS3, but some logical alternatives have been made available. Two complementary approaches are reported: spr, which shows the long-term effect of fishing on the stock’s spawning potential, and \(F\text{\_std}\), which is a statistic related to the fraction of the population removed each year. The various options for both of these are found in the starter.ss file and described in that section of this manual. \(F\text{std}_y\) is reported in the derived quantities, so variance is calculated for this quantity.

12.6.2.2 Additional \(F\) output


The \(F\) statistics are displayed most explicitly in the Report.sso table “EXPLOITATION report:14”. There, the table columns are:

	Yr Seas Seas_dur F_std annual_F annual_M <each fleet's F_scalar>
	

In this table, the displayed value for \(\text{annual\_}F\) will be from the \(F=Z-M\) method regardless of which option was chosen for \(F\text{\_std}\). If \(F\text{\_std}\) uses option 4 or 5, then the \(\text{annual\_}F\) will use the same range of ages. Otherwise, \(\text{annual\_}F\) will be for the age that is the mid-age of the age range of the model.

12.6.2.3 \(F\text{-at-Age}\)


\(F\text{-at-Age}\) for each fleet is presented in Report.sso as table 39. Its header looks like:

	F_AT_AGE report:39
	Area Fleet Sex Morph Yr Seas Era 0 1 2
	

In addition to the \(F\_std\) and \(\text{annual\_}F\) outputs, information on total \(F\text{-at-Age}\) across all fleets is reported at the end of the Report.sso file. This section of the report calculates \(Z\text{-at-Age}\) as \(-ln(N_{a+1,t+1}/N_{a,t})\). This is done for numbers at the beginning of each year, and the \(N\) values are summed over all areas. It is done once using the fishing intensities as estimated (to get \(Z\text{-at-Age}\)), and once with the \(F\) intensities set to 0.0 to get \(M\text{-at-Age}\). This latter sequence also provides a measure of dynamic Virgin Spawning Biomass. The user can then subtract the table of \(M\text{-at-Age/year}\) from the table of \(Z\text{-at-Age/year}\) to get a table of \(F\text{-at-Age/year}\). From this, \(\text{apical\_}F\), average \(F\) over a range of ages, or other user-desired statistics can be calculated. The header for this table looks like:

	Z_AT_AGE_Annual_2 With_fishery
	Bio_Pattern Sex Yr  0 1 2
	

A more detailed report provides \(Z\) for each area, bio group (sex, morph, platoon), and age. It is titled something like “Report_Z_by_area_morph_platoon_1 No_fishery”. This table is produced with and without the \(F\) turned on, so that the pair of reports can be processed to get total \(F\) by area, bio group, and age.
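
A minimal sketch of that subtraction, assuming the with-fishery and no-fishery tables have already been read from the end of Report.sso into data frames z_at_age and m_at_age (hypothetical names) with matching rows and age columns labeled "0", "1", etc.:

	# identify the age columns shared by both tables
	age_cols <- setdiff(names(z_at_age), c("Bio_Pattern", "Sex", "Yr"))
	# F-at-age/year = Z-at-age/year minus M-at-age/year
	f_at_age <- z_at_age
	f_at_age[age_cols] <- z_at_age[age_cols] - m_at_age[age_cols]
	# example statistic: apical F (maximum F across ages) for each row
	apical_F <- apply(f_at_age[age_cols], 1, max)
	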

12.6.2.4 Relative \(F\) and \(F\text{mult}\)


The \(F'\) is fleet-specific, so it is useful to have a concept of relative \(F\), \(\text{rel}F_f\), among fleets. In SS3, \(\text{rel}F_f= F_{t,f}'/\sum_{f}^{}F_{t,f}'\) for a single time period \(t\). In the benchmark and forecast routines, SS3 can calculate \(\text{rel}F_f\) using \(F_{t,f}'\) over a range of years, or the user can input custom \(\text{rel}F\) values for benchmark and forecast in the forecast.ss file. Note that in a multi-season model setup, \(\text{rel}F_f\) is implemented as \(\text{rel}F_{s,f}\) where \(s\) is the season. These get multiplied by season duration as they are used.

In the benchmark section of the code, SS3 searches for an \(F\text{mult}\) to achieve various management reference points (often referred to as benchmarks). In this search, SS3 calculates a benchmark \(F\) as \(F_{ben,f}' = F\text{mult} * \text{rel}F_f\), then calculates equilibrium yield and spawning biomass per recruit (spr). SS3 searches for the \(F\text{mult}\) that satisfies the search conditions, first for user-specified spr, then for user-specified spawning biomass at a management target (\(B_{TARGET}\)) or \(F_{0.1}\), then for msy. The resultant benchmark quantities are reported in the derived quantities, but \(F\text{mult}\) and \(F_{ben,f}'\) are only reported in the Forecast-report.sso file. SS3 stores the benchmark \(F\text{mult}\) values so that the user can invoke them for the forecast.
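
The relationships above can be illustrated with a small numeric sketch (the fleet names, \(F\) values, and \(F\text{mult}\) value are hypothetical):

	# fleet-specific F' for a single time period
	F_fleet <- c(FleetA = 0.12, FleetB = 0.05, FleetC = 0.03)
	relF <- F_fleet / sum(F_fleet)  # relF_f = F'_f / sum of F'_f over fleets
	# a benchmark multiplier as found by SS3's search (hypothetical value)
	Fmult <- 0.25
	F_ben <- Fmult * relF           # benchmark F' by fleet
	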


Below is a list of items to consider in terms of units for \(F\) in SS3:

12.6.3 MSY and other Benchmark Items

The following quantities are included in the sdreport vector mgmt_quantities, so estimates of variance are available for them. Some additional quantities can be found in the benchmarks section of the Forecast-report.sso.

Benchmark Item Description
SSB_Unfished Unfished reproductive potential (ssb, commonly female mature spawning biomass).
TotBio_Unfished Total age 0+ biomass on January 1.
SmryBio_Unfished Biomass for ages at or above the summary age on January 1.
Recr_Unfished Unfished recruitment.
SSB_Btgt ssb at user specified ssb target.
SPR_Btgt spr at \(F\) intensity that produces user specified ssb target.
Fstd_Btgt \(F\) statistic at \(F\) intensity that produces user specified ssb target.
TotYield_Btgt Total yield at \(F\) intensity that produces user specified ssb target.
SSB_SPRtgt ssb at user specified spr target (but taking into account the spawner-recruitment relationship).
Fstd_SPRtgt \(F\) intensity that produces user specified spr target.
TotYield_SPRtgt Total yield at \(F\) intensity that produces user specified spr target.
SSB_MSY ssb at \(F\) intensity that is associated with msy; this \(F\) intensity may be directly calculated to produce msy, or can be mapped to \(F_{SPR}\) or \(F_{B_{TARGET}}\).
SPR_MSY spr at \(F\) intensity associated with msy.
Fstd_MSY \(F\) statistic at \(F\) intensity associated with msy.
TotYield_MSY Total yield (biomass) at msy.
RetYield_MSY Retained yield (biomass) at msy.

12.7 Brief cumulative output

Cum_Report.sso: contains a brief version of the run output, which is appended to the current content of the file so that the results of several runs can be collected together. This is especially useful when a batch of runs is being processed. Unless this file is deleted, it will contain a cumulative record of all runs done in that subdirectory. The first column contains the run number.

12.8 Bootstrap Data Files

It is possible to create bootstrap data files for SS3, where an internal parametric bootstrap function generates a simulated data set by sampling from the expected values given the input observation error. Starting in v.3.30.19, bootstrap data files are output separately in individually numbered files (e.g., data_boot_001.ss). In versions prior to v.3.30.19, a single file called data.ss_new was output that contained multiple sections: the original data echoed out, the expected data values based on the model fit, and then subsequent bootstrap data files.

Specifying the number of bootstrap data files has remained the same across model versions. Creating bootstrap data files is specified in the starter file via the “Number of datafiles to produce” line, where a value of 3 or greater will create three kinds of files: the echoed original data file, data_echo.ss_new; a data file with the model expected values, data_expval.ss; and one or more bootstrap data files, starting with data_boot_001.ss. The first output provides the unaltered input data file (with annotations added). The second provides the expected values for only the data elements used in the model run. The third and subsequent outputs provide parametric bootstraps around the expected values.
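
Because a bootstrap data file has the same structure as an ordinary data file, it can be read and re-fit like any other input. A minimal sketch using the r4ss file functions listed in the Using R To View Model Output section (the output file name is an assumption; it should match the data file named in starter.ss):

	library(r4ss)
	# read the first bootstrap data file
	boot1 <- SS_readdat("data_boot_001.ss")
	# write it over the data file used by the model before re-running SS3
	SS_writedat(boot1, outfile = "data.ss", overwrite = TRUE)
	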

The bootstrapping procedure within SS3 is done via the following steps:

Given this, there are some assumptions implicit in the bootstrapping procedure (as implemented as of v.3.30.17) that users should be aware of:

12.9 Forecast and Reference Points (Forecast-report.sso)

The Forecast-report file contains output of fishery reference points and forecasts. It is designed to meet the needs of the Pacific Fishery Management Council’s Groundfish Fishery Management Plan, but it should be quite feasible to develop other regionally specific variants of this output.

The vector of forecast recruitment deviations is estimated during an additional model estimation phase. This vector includes any years after the end of the recruitment deviation time series and before or at the end year. When this vector starts before the ending year of the time series, then the estimates of these recruitments will be influenced by the data in these final years. This is problematic, because the original reason for not estimating these recruitments at the end of the time series was the poor signal/noise ratio in the available data. It is not that these data are worse than data from earlier in the time series, but the low amount of data accumulated for each cohort allows an individual datum to dominate the model’s fit. Thus, an additional control is provided so that forecast recruitment deviations during these years can receive an extra weighting in order to counter-balance the influence of noisy data at the end of the time series.

An additional control is provided for the fraction of the log-bias adjustment to apply to the forecast recruitments. Recall that R is the expected mean level of recruitment for a particular year as specified by the spawner-recruitment curve and R’ is the geometric mean recruitment level calculated by discounting R with the log-bias correction factor \(e^{-0.5\sigma_R^2}\). Thus, a log-normal distribution of recruitment deviations centered on R’ will produce a mean level of recruitment equal to R. During the modeled time series, the virgin recruitment level and any recruitments prior to the first year of recruitment deviations are set at the level of R, and the log-normal recruitment deviations are centered on the R’ level. For the forecast recruitments, the fraction control can be set to 1.0 so that 100% of the log-bias correction is applied and the forecast recruitment deviations will be based on the R’ level. This is certainly the configuration to use when the model is in mcmc mode. Setting the fraction to 0.0 during maximum likelihood forecasts would center the recruitment deviations, which all have a value of 0.0 in maximum likelihood mode, on R. This would provide a mean forecast that would be more comparable to the mean of the ensemble of forecasts produced in mcmc mode. Further work on this topic is underway.

Note:

The top of the Forecast-report file shows the search for \(F_{SPR}\) and the search for \(F_{MSY}\), allowing the user to verify convergence. Note: if the STD file shows aberrant results, such as all the standard deviations being the same value for all recruitments, then check the \(F_{MSY}\) search for convergence. The \(F_{MSY}\) can be calculated, or set equal to one of the other \(F\) reference points per the selection made in starter.ss.

13 Using R To View Model Output (r4ss)

The R package r4ss includes tools for summarizing and plotting results, manipulating files, visualizing model parameterizations, and other tasks. Currently, information about r4ss can be found on GitHub. The software package is under continuous development to maintain compatibility with new versions of SS3 and to improve functionality.

The latest version of r4ss can be installed directly from GitHub at any time via the remotes package in R with the following commands:

	> install.packages("remotes")
	> remotes::install_github("r4ss/r4ss")
	

Once the r4ss package is installed, it can be loaded:

	> library(r4ss)
	

Two of the most commonly used functions for model diagnostics are SS_output() and SS_plots(). After running a model using SS3, the output files including Report.sso can be read into R using the SS_output() function which stores quantities in a list with named objects. This list can then be passed to the SS_plots() function, which creates a series of over 100 plots that are useful for visualizing output such as model fits to the data. For example, plots can be created using model output available in the directory “C:/myfiles/mymodels/myrun”:

	> base.model <- SS_output("C:/myfiles/mymodels/myrun")
	> SS_plots(base.model)
	

The core functions available in r4ss include:

Core Functions
SS_output() A function to create a list object for the output from Stock Synthesis
SS_plots() Plot many quantities related to output from Stock Synthesis
Download the SS3 Executable:
get_ss3_exe() Download the latest version or a specified version of the SS3 executable
Model comparisons and other diagnostics:
SSsummarize() Read output from multiple SS3 models
SStableComparison() Make table comparing quantities across models
SSplotComparison() Plot output from multiple SS3 models
SSplotPars() Plot distributions of priors, posteriors, and estimates
SS_profile() Run likelihood parameter profiles
SSplotProfile() Plot likelihood profile results
PinerPlot() Plot fleet-specific contributions to likelihood profile
Jitter() Run multiple model jitters to determine best model fit
SS_doRetro() Run retrospective analysis
SSmohnsrho() Calculate Mohn’s Rho values
SSplotRetroRecruits() Make retrospective pattern of recruitment estimates (a.k.a. squid plot) as seen in Pacific hake assessments
SS_fitbiasramp() Estimate bias adjustment for recruitment deviates
File manipulation for inputs:
SS_readdat() Read data file
SS_readctl() Read control file
SS_readforecast() Read forecast file
SS_readstarter() Read starter file
SS_readwtatage() Read weight-at-age file
SS_writedat() Write data file
SS_writectl() Write control file
SS_writeforecast() Write forecast file
SS_writestarter() Write starter file
SS_writewtatage() Write weight-at-age file
SS_makedatlist() Make a list for SS3 data
SS_parlines() Get parameter lines from SS3 control file
SS_changepars() Change parameters in the control file
SSmakeMmatrix() Create inputs for entering a matrix of natural mortality by age and year
SS_profile() Run a likelihood profile in SS3 (incomplete)
NegLogInt_Fn() Calculate variances of time-varying parameters using the SS3 implementation of the Laplace Approximation
File manipulations for outputs:
SS_recdevs() Insert a vector of recruitment deviations into the control file
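
As an example of combining these functions, the sketch below compares two model runs (directory names are hypothetical; note that in recent r4ss releases the comparison functions are pluralized, e.g., SSplotComparisons()):

	library(r4ss)
	base <- SS_output("models/base")
	sens <- SS_output("models/sensitivity")
	# summarize quantities across models, then plot them together
	model_summary <- SSsummarize(list(base, sens))
	SSplotComparisons(model_summary, legendlabels = c("Base", "Sensitivity"))
	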

14 Advanced Stock Synthesis Configuration Settings and Advice

14.1 Using Time-Varying Parameters

14.1.1 Time-Varying Parameters

Starting in v.3.30, mortality-growth, some stock-recruitment, catchability, and selectivity base parameters can be time varying. Note that as of v.3.30.16, time-varying parameters cannot be used with tagging parameters. There are four ways a parameter can be time-varying in SS3:

  1. Environmental or Density dependent Linkages: Links the base parameter with environmental data or a model derived quantity.

  2. Parameter deviations: Creates annual deviations from the base parameter during a user-specified range of years.

  3. Time blocks: The base parameter is changed during a “block” (or “blocks”) of time (i.e., one or more consecutive years) as specified by the user.

  4. Trends: A trend (shape: cumulative normal distribution function) is applied to the parameter. Trends are specified using the same input column as time blocks, but with different codes. This means that trends and time blocks cannot be used simultaneously for the same base parameter.

Environmental and density dependent linkages, parameter deviations, and either time blocks or trends can be applied to the same base parameter. The model processes each time-varying parameter specification (first time blocks and trends, then environmental linkages, then parameter deviations) and creates a time series of intermediate values that are used as the model subsequently loops through years.

Some examples of time-varying setups are given in the Example Time-varying Parameter Setups section below.

14.1.2 Specification of Time-Varying Parameters: Long Parameter Lines

Time-varying specifications for a parameter are invoked using elements 8–14 in the long parameter line setup. Each element and the options for selecting time-varying parameters are described below.

Code for the deviation link can be found in SS_timevaryparm.tpl, search for “SS_Label_Info_14.3”.

14.1.3 Specification of Time-Varying Parameters: Short Parameter Lines

If a time-varying specification set up in the long parameter lines for a particular section requires additional parameters, short parameter lines need to be created following the long parameter lines for the section (unless autogeneration is used, which creates short parameter lines in control.ss_new upon running the model). The number of parameter lines required depends on the time-varying parameter specification.

For example, if two parameters were specified to have environmental linkages in the mg parameter section, two short parameter lines would follow the mg parameters (when not auto-generating these lines), one environmental linkage parameter for each time-varying base parameter:

LO   HI   INIT   Prior_Value   Prior_sd   Prior_Type   Phase   Parameter Label
-99  99   1      0             0.01       0            -1      #Wtlen_1_Fem_ENV_add
-99  99   1      0             0.01       0            -1      #Wtlen_2_Fem_ENV_add

In Stock Synthesis v.3.30, the time-varying input short parameter lines are organized such that all parameters that affect a base parameter are clustered together with time blocks (or trend) first, then environmental linkages, then parameter deviations. For example, if the mg base parameters 3 and 7 had time varying changes, the order would look like:

mg base parameter 3:
  Block parameter 3-1
  Block parameter 3-2
  Environmental link parameter 3-1
  Deviation se parameter 3
  Deviation \(\rho\) parameter 3
mg base parameter 7:
  Block parameter 7-1
  Deviation se parameter 7
  Deviation \(\rho\) parameter 7

The number of short parameter lines for each time-varying setup selected depends on the selection options. The autogeneration feature can be used to figure out which parameter lines are needed. The short parameter lines needed for different time-varying options are:

14.1.4 Example Time-varying Parameter Setups

The time-varying parameter options in Stock Synthesis are flexible. Below are some example setups that illustrate how the time-varying options could be used in a model, although there are many more possible setups.

14.1.4.1 Environmental and density dependent linkages


14.1.4.2 Parameter Deviations


14.1.4.3 Time Blocks



14.1.5 Time-Varying Growth Considerations

When time-varying growth is used, there are some additional considerations to be aware of:

14.1.6 Time-Varying Stock-Recruitment Considerations

14.1.7 Forecast Considerations with Time-Varying Parameters

Users should judiciously consider which parameter values are applied during forecast years. SS3 defaults to using all base parameter values during the forecast period; alternatively, the user can specify in the forecast file which years of selectivity, relative \(F\), and recruitment should be used during the forecast period.

Time-varying parameters can extend into the forecast period. For example, a parameter with a time block that stops at the model end year will revert to the base parameter value for the forecast, but when the block definition extends to include some or all forecast years, the last block will apply to the forecast. A good practice is to use 9999 as the terminal year for the last block to ensure it includes all forecast years. If a parameter has deviations and the deviations’ year range includes the forecast years, then the parameter will have process uncertainty in the forecast years, and mcmc draws (if used) will include that variability.

14.2 Parameterizing the Two-Dimensional Autoregressive Selectivity

When the two-dimensional autoregressive selectivity (2DAR) feature is turned on for a fleet, the selectivity is calculated as the product of the assumed selectivity pattern and a non-parametric term for deviations from this assumed pattern:

\[\hat{S}_{a,t} = S_a e^{\epsilon_{a,t}}\]

where \(S_a\) is specified in the corresponding age/length selectivity types section, and it can be either parametric (recommended) or non-parametric (including any of the existing selectivity options in SS3); \(\epsilon_{a,t}\) is simulated as a two-dimensional first-order autoregressive (2DAR) process:

\[vec(\epsilon) \sim MVN(\mathbf{0},\sigma_s^2\mathbf{R_{total}})\]

where \(\epsilon\) is the two-dimensional deviation matrix and \(\sigma_s^2\mathbf{R_{total}}\) is the covariance matrix for the 2DAR process. More specifically, \(\sigma_s^2\) quantifies the variance in selectivity deviations and \(\mathbf{R_{total}}\) is equal to the Kronecker product (\(\otimes\)) of the two correlation matrices for the among-age and among-year AR processes:

\[\mathbf{R_{total}}=\mathbf{R}\otimes\mathbf{\tilde{R}}\]

\[\mathbf{R}_{a,\tilde{a}}=\rho_a^{|a-\tilde{a}|}\]

\[\mathbf{\tilde{R}}_{t,\tilde{t}}=\rho_t^{|t-\tilde{t}|}\]

where \(\rho_a\) and \(\rho_t\) are the among-age and among-year AR coefficients, respectively. When both of them are zero, \(\mathbf{R}\) and \(\mathbf{\tilde{R}}\) are two identity matrices and their Kronecker product, \(\mathbf{R_{total}}\), is also an identity matrix. In this case, the selectivity deviations are essentially independent and identically distributed:

\[\epsilon_{a,t}\sim N(0,\sigma_s^2)\]
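
The correlation structure above can be constructed directly in R; a short sketch with arbitrary dimensions and coefficient values:

	# among-age and among-year AR coefficients (arbitrary values)
	rho_a <- 0.5
	rho_t <- 0.3
	ages <- 0:10
	years <- 1:20
	R_age  <- rho_a^abs(outer(ages, ages, "-"))    # R[a, a~] = rho_a^|a - a~|
	R_year <- rho_t^abs(outer(years, years, "-"))  # R[t, t~] = rho_t^|t - t~|
	# covariance of vec(epsilon) is sigma_s^2 times the Kronecker product
	R_total <- kronecker(R_age, R_year)
	# when rho_a = rho_t = 0, R_total reduces to an identity matrix
	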

14.2.0.1 Using the Two-Dimensional Autoregressive Selectivity


See Xu et al. (2019) and Xu, Thorson, and Methot (2020) for information on tuning the 2DAR selectivity parameters. There is not yet a generalized method to automate the tuning, so the information below provides a general framework. Additionally, the stand-alone tmb code used in Xu et al. (2019) to estimate the two correlation coefficients for selectivity deviations (rho_a and rho_t) outside SS3 is available on GitHub (https://github.com/HaikunXu/2DAR4ss/tree/main) along with a user manual in the same repository.

First, fix the two AR coefficients (\(\rho_a\) and \(\rho_t\)) at 0 and tune \(\sigma_s\) iteratively to match the relationship:

\[\sigma_s^2=SD(\epsilon)^2+\frac{1}{(a_{max}-a_{min}+1)(t_{max}-t_{min}+1)}\sum_{a=a_{min}}^{a_{max}}\sum_{t=t_{min}}^{t_{max}}SE(\epsilon_{a,t})^2\]

The minimal and maximal ages/lengths and years for the 2DAR process can be freely specified by users in the control file. However, we recommend specifying the minimal and maximal ages and years to cover the relatively “data-rich” age/length and year ranges only. Particularly we introduce:

\[b=1-\frac{\frac{1}{(a_{max}-a_{min}+1)(t_{max}-t_{min}+1)}\sum_{a=a_{min}}^{a_{max}}\sum_{t=t_{min}}^{t_{max}}SE(\epsilon_{a,t})^2}{\sigma_s^2}\]

as a measure of how informative the composition data are for estimating selectivity deviations. We also recommend using the Dirichlet-multinomial method to weight the corresponding composition data while \(\sigma_s\) is iteratively tuned in this step.
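
A minimal sketch of checking the step-one tuning target and the measure \(b\), assuming the estimated deviations and their standard errors over the chosen age/year range have been extracted into matrices eps_hat and se_eps (hypothetical names):

	# tuning target: sigma_s^2 = SD(eps)^2 + mean(SE(eps)^2)
	sigma_s_target <- sqrt(sd(as.vector(eps_hat))^2 + mean(as.vector(se_eps)^2))
	# data-richness measure b; values near 1 indicate well-informed deviations
	b <- 1 - mean(as.vector(se_eps)^2) / sigma_s_target^2
	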

Second, fix \(\sigma_s\) at the value tuned in the previous step and estimate \(\epsilon_{a,t}\). Plot both the Pearson residuals and \(\epsilon_{a,t}\) on the age-year surface to check for patterns across the two dimensions. If their distributions appear autocorrelated rather than random (deviation estimates have the same sign several ages and/or years in a row), users should consider estimating and then including the autocorrelations in \(\epsilon_{a,t}\).

14.3 Continuous seasonal recruitment

Setting up a seasonal model such that recruitment can occur with similar and independent probability in any season of any year is awkward in SS3. Instead, SS3 can be set up so that each quarter appears as a year (i.e., a seasons-as-years model). All the data and parameters are set up to treat quarters as if they were years. Note that setting up a seasons-as-years model also requires that all rate parameters be re-scaled to correctly account for the quarters being treated as years (e.g., an annual natural mortality rate \(M\) must be divided by 4 so that the correct total mortality accrues over four quarter-years).

Other adjustments to make when using seasons as years include:

15 Detailed Information on Stock Synthesis Processes

The processes and calculations within SS3 can be complex and not transparent based on the model input files. Here, additional information on processes within SS3 is provided.

15.1 Jitter

The following steps are now performed to determine the jittered starting parameter values (illustrated in Figure 3):

  1. A normal distribution is calculated such that the \(pr(P_{MIN}) = 0.1\%\) and the \(pr(P_{MAX}) = 99.9\%\).

  2. A jitter shift value, termed “\(K\)”, is calculated from the distribution equal to \(pr(P_{CURRENT})\).

  3. A random value is drawn, “\(J\)”, from the range of \(K\)-jitter to \(K\)+jitter.

  4. Any value which falls outside the 0-1 range (in the cumulative normal space) is mapped back from the bound to a point one-tenth of the way from the bound to the initial value.

  5. \(J\) is a new cumulative normal probability value.

  6. Calculate a new parameter value, \(P_{JITTERED}\), such that \(pr(P_{JITTERED}) = J\).

Illustration of the jitter algorithm (Figure 3): a cumulative normal distribution, with the 0.001 and 0.999 quantiles set to the parameter minimum and maximum, maps between parameter space (x-axis) and cumulative probability space (y-axis); the uniform random draw is taken within plus or minus the jitter value around the transformed initial value.

In SS3, the jitter fraction defines a uniform distribution in cumulative normal space +/- the jitter fraction from the initial value (in cumulative normal space). The normal distribution for each parameter, for this purpose, is defined such that the minimum bound is at 0.001, and the maximum at 0.999 of the cumulative distribution. If the jitter fraction and original initial value are such that a portion of the uniform distribution goes beyond 0.001 or 0.999 of the cumulative normal, the new value is set to one-tenth of the way from the bound to the original initial value.

Therefore, \(\sigma\) = (max-min) / 6.18, because the 0.001 and 0.999 quantiles lie 3.09 standard deviations on either side of the mean. For parameters that are on the log scale, \(\sigma\) may be the correct measure of variation for jitters; for real-space parameters, cv = \(\sigma\)/(original initial value) may be a better measure.

If the original initial value is at or near the middle of the min-max range, then for each 0.1 of jitter, the range of jitters extends about 0.25 sigmas to either side of the original value (though as the total jitter increases the relationship varies more than this), and the average absolute jitter is about half of that. For values far from the middle of the min-max range, the resulting jitter is skewed in parameter space, and may hit the bound, invoking the resetting mentioned above.
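
The steps above can be expressed compactly in R; this is an illustrative sketch of the transformation, not SS3’s internal code:

	jitter_param <- function(init, pmin, pmax, jitter) {
	  sigma <- (pmax - pmin) / 6.18  # places pmin, pmax at the 0.001, 0.999 quantiles
	  mu <- (pmin + pmax) / 2
	  K <- pnorm(init, mu, sigma)    # cumulative position of the current value
	  J <- runif(1, K - jitter, K + jitter)
	  # values beyond the bounds are reset one-tenth of the way from bound to init
	  if (J < 0.001) return(pmin + 0.1 * (init - pmin))
	  if (J > 0.999) return(pmax - 0.1 * (pmax - init))
	  qnorm(J, mu, sigma)            # back-transform to parameter space
	}
	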

To evaluate the jittering, the bounds, and the original initial values, a jitter_info table is available from r4ss, including sigma, cv and InitLocation columns (the latter referring to location within the cumulative normal - too close to 0 or 1 indicates a potential issue).

Note: parameters with min \(\leq\) -99 or max \(\geq\) 999 are not jittered to avoid unreasonable values (a warning is produced to indicate this).

15.2 Parameter Priors

Priors on parameters fulfill two roles in SS3. First, for parameters provided with an informative prior, SS3 is receiving additional information about the true value of the parameter. This information works with the information in the data, through the overall log likelihood function, to arrive at the final parameter estimate. Second, diffuse priors provide only weak information about the value of a parameter and serve to manage model performance during execution. For example, some selectivity parameters may become unimportant depending upon the values of other parameters of that selectivity function. In the double normal selectivity function, the parameters controlling the width of the peak and the slope of the descending side become redundant if the parameter controlling the final selectivity moves to a value indicating asymptotic selectivity. The width and slope parameters would no longer have any effect on the log likelihood, so they would have no gradient in the log likelihood and would drift aimlessly. A diffuse prior would then steer them towards a central value and keep them from crashing into the bounds. Another benefit of diffuse priors is the control of parameters that are given unnaturally wide bounds. When a parameter is given too broad a bound, early in a model run it could drift into the tail of this range and potentially get into a situation where the gradient with respect to that parameter approaches zero even though it is not at its global best value. Here the diffuse prior helps move the parameter back towards the middle of its range, where it presumably will be more influential and estimable.

The options for parameter priors are described as a function of \(Pval\), the value of the parameter for which a prior is being calculated, as well as the parameter bounds in the case of the beta distribution (\(Pmax\) and \(Pmin\)), and the input values for \(Prior\) and \(Pr\_SD\), which in some cases are the mean and standard deviation, but interpretation depends on the prior type. The Prior Likelihoods below represent the negative log likelihood in all cases.

15.2.0.1 Prior Types


Note that the numbering in v.3.30 is different from that used in v.3.24 (where, confusingly, -1 indicated no prior and 0 indicated a normal prior). The calculation of the negative log likelihood is provided below for each prior type, as a function of the following inputs:

\(P_\text{init}\) The value of the parameter for which a prior is being calculated, where init can be either the initial un-estimated value or the estimated value (3rd column in the control or control.ss_new file)
\(P_\text{LB}\) The lower bound of the parameter (1st column in control file)
\(P_\text{UB}\) The upper bound of the parameter (2nd column in control file)
\(P_\text{PR}\) The input value for the prior input (4th column in control file)
\(P_\text{PRSD}\) The standard deviation input value for the prior (5th column in control file)
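
For example, a sketch of the negative log likelihood for a normal prior in the notation above (the additive constant is omitted, consistent with a penalty interpretation; this is an illustration, not the exact SS3 source code):

	# negative log likelihood contribution of a normal prior on one parameter
	neg_log_prior_normal <- function(P_init, P_PR, P_PRSD) {
	  0.5 * ((P_init - P_PR) / P_PRSD)^2
	}
	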

15.3 Forecast Module: Benchmark and Forecasting Calculations

Stock Synthesis v.3.20 introduced substantial upgrades to the benchmark and forecast module. The general intent was to make the forecast outputs more consistent with the requirement to set catch limits that have a known probability of exceeding the overfishing limit. In addition, this upgrade addressed several inadequacies with the previous module, including:

The v.3.20 module addressed these issues by:

15.3.0.1 Multiple Pass Forecast


The most complicated aspect of the changes is the multiple pass aspect of the forecast. This multiple pass approach is needed to calculate both ofl and abc in a single model run. More importantly, the multiple passes are needed in order to mimic the actual sequence of assessment, management action, and catch over a multi-year period. The first pass calculates ofl based on catching ofl each year, so it presents the absolute maximum upper limit to catches. The second pass forecasts a catch based on a harvest policy, then applies catch caps and allocations, then updates the \(F\)’s to match these catches. In the third pass, stochastic recruitment and catch implementation error are implemented, and SS3 calculates the \(F\) that would be needed in order to catch the adjusted catch amount previously calculated in the second pass. With this approach, SS3 is able to produce improved estimates of the probability that \(F\) would exceed the overfishing \(F\). In effect, it is the complement of the P* approach: rather than calculating the stream of annual catches that would have an annual probability of \(F > F_{LIMIT}\), SS3 calculates the expected time series of P* that would result from a specified harvest policy implemented as a buffer between \(F_{TARGET}\) and \(F_{LIMIT}\).

The sequence of multiple forecast passes is as follows:

  1. Pass 1 (a.k.a. Fcast_Loop1)

    1. Loop Years

      1. SubLoop (a.k.a. ABC_Loop) = 1

        1. R = \(f(SSB)\) with no deviations

        2. \(F\) = \(F_{LIMIT}\)

        3. Fixed input catch amounts ignored

        4. No catch adjustments (caps and allocations)

        5. No implementation error

        6. Result: ofl conditioned on catching ofl each year

  2. Pass 2

    1. Loop Years

      1. SubLoop = 1

        1. R = \(f(SSB)\) with no deviations

        2. \(F\) = \(F_{LIMIT}\)

        3. Fixed input catch amounts ignored

        4. No catch adjustments (caps and allocations)

        5. No implementation error

        6. Result: ofl conditioned on catching abc previous year. Stored in std_vector.

      2. SubLoop = 2

        1. R = \(f(SSB)\) with no deviations

        2. \(F\) = \(F_{TARGET}\) to get catch for each fleet in each season

        3. Fixed input catch amounts replace catch from step 2

        4. Catch adjustments (caps and allocations) applied on annual basis (after looping through seasons and areas within this year). These adjustments utilize the logistic joiner approach so the overall results remain completely differentiable.

        5. No implementation error

        6. Result: abc as adjusted for caps and allocations

      3. SubLoop = 3

        1. R = \(f(SSB)\) with no deviations

        2. Catches from Pass 2 multiplied by the random term for implementation error

        3. \(F\) = adjusted to match the catch*error while taking into account the random recruitments. This is most easily visualized in a mcmc context where the recruitment deviation and the implementation error deviations take on non-zero values in each instance. In mle, because the forecast recruitments and implementation error are estimated parameters with variance, their variance still propagates to the derived quantities in the forecast.

        4. Result: Values for \(F\), ssb, Recruitment, Catch are stored in std-vectors

          • In addition, the ratios \(F/F_{LIMIT}\) and \(SSB/SSB_{LIMIT}\) or \(SSB/SSB_{TARGET}\) are also stored in std_vectors.

          • Estimated variance in these ratios allows calculation of annual probability that \(F > F_{LIMIT}\) or \(B < B_{LIMIT}\). This is essentially the realized P* conditioned on the specified harvest policy.

15.3.0.2 Example Effects on Correlations


An example illustrating the above process follows. The situation was a low \(M\), late-maturing species, so changes are not dramatic. The example conducted a 10-year forecast and examined correlations with derived quantities in the last year of the forecast. This was done once with the full set of 3 passes as described above, and again with only 2 passes and stochastic recruitment occurring in pass 2, rather than pass 3. This alternative setup is more similar to forecasts done using previous model versions.

2 Forecast Passes with \(F\) from abc and random recruitment:
    Factor X          Factor Y        Corr
A1  F 2011            RecrDev 2002    -0.126
B1  F 2011            Recr 2002        0.312
C1  ForeCatch 2011    RecrDev 2002     0.000
D1  ForeCatch 2011    Recr 2002        0.455

2 Forecast Passes with catch from target \(F\) and equilibrium recruitment:
    Factor X          Factor Y        Corr
A2  F 2011            RecrDev 2002     0.090
B2  F 2011            Recr 2002        0.518
C2  ForeCatch 2011    RecrDev 2002     0.129
D2  ForeCatch 2011    Recr 2002        0.555

Correlation A2 shows a small positive correlation between the recruitment deviation in 2002 and the \(F\) in 2011. This is probably due to the fact that a positive deviation in recruitment in 2002 will reduce the chance that the biomass in 2011 will be below the inflection point in the control rule. This occurs because, in calculating catch from \(F\), the model effectively “knows” the future recruitments. This correlation would be expected to be near zero if there were no inflection in the control rule.

Correlation A1 shows this turning into a negative correlation. This is because the future catches are first calculated from equilibrium recruitment, then when random recruitments are implemented, a positive recruitment deviation will cause a negative deviation in the \(F\) needed to catch that now “fixed” amount of future catch.

Correlations B1 and B2 are in terms of absolute recruitment, not recruitment deviation. Here, overall model conditions that cause a higher absolute recruitment level will also result in a higher forecast level. No surprise there, and the correlation is stronger when catch is calculated from \(F\) (B2).

Correlation C2 shows a positive correlation between recruitment deviation in 2002 and forecast catch in 2011. However, correlation C1 is 0.0 because the forecast catch in 2011 is set based on equilibrium recruitment and is not influenced by the recruitment deviations.

15.3.0.3 Future Work


16 Guidance on Population Dynamics Modeling

Numerous assessment-related topics have arisen among users. A collection of general information on population dynamics modeling is provided here. This information is relevant both to users of SS3 and to users of other stock assessment modeling platforms.

16.1 Data Weighting

In 2015 there was a capam workshop dedicated to data weighting. A description of the workshop can be found on the capam website. The presentations from the workshop are available through that website, and many of them were included in a special issue of Fisheries Research.

Currently, there are three main methods for weighting length and age composition data applied in U.S. West Coast assessments using Stock Synthesis.

  1. McAllister-Ianelli: Effective sample size is calculated from the fit of observed to expected length or age compositions. The tuning algorithm is intended to make the arithmetic mean of the input sample size equal to the harmonic mean of the effective sample size (McAllister and Ianelli 1997).

  2. Francis: Based on variability in the observed mean length or age by year, where the sample sizes are adjusted such that the fit of the expected mean length or age should fall within the uncertainty intervals at a rate consistent with the variability expected based on the adjusted sample sizes (Method “TA1.8”) (Francis 2011).

  3. Dirichlet-Multinomial: A new likelihood (as opposed to the standard multinomial) which includes an estimable parameter (theta) which scales the input sample size. In this case, the term “Effective sample size” has a different interpretation than in the McAllister-Ianelli approach (Thorson et al. 2017).

16.1.0.1 Applying the methods


16.1.0.2 McAllister-Ianelli


The “Length_Comp_Fit_Summary” and “Age_Comp_Fit_Summary” sections in the Report file include information on the harmonic mean of the effective sample size and arithmetic mean of the input sample size used in this tuning method. In the r4ss package, these tables are returned by the SS_output() function as $Length_comp_Eff_N_tuning_check and $Age_comp_Eff_N_tuning_check.

A convenient way to process these values into the format required by the control file is to use the function:

	SS_tune_comps(replist, option = "MI")

where the input “replist” is the object created by SS_output(). This function will return a table and also write a matching file called suggested_tuning.ss to the directory where the model was run.

For version 3.30 models, the table created by SS_tune_comps() can be pasted into the bottom of the control file in the section labeled “Input variance adjustments”, followed by the terminator line which indicates the end of the section.

Also see the help page for the r4ss function SS_varadjust() which can be used to automatically write a new control file if you want to streamline the process of applying multiple iterations of this tuning method.
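As a minimal sketch of one tuning iteration (assuming the r4ss package is installed and that "my_model" is an illustrative directory containing a completed model run):

  library(r4ss)
  # read output files from a completed run
  replist <- SS_output(dir = "my_model")
  # compute suggested McAllister-Ianelli adjustments and write
  # suggested_tuning.ss to the model directory
  tuning <- SS_tune_comps(replist, option = "MI")
  # paste the table into the "Input variance adjustments" section of the
  # control file (or automate that step with SS_varadjust()), then re-run
  # the model and repeat until the adjustments stabilize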

If the tuning has been implemented, the green lines in the figure below (created by the r4ss function SS_plots()) should approximately intersect at a point on the black 1-to-1 diagonal line.

A plot produced from SS_plots() in r4ss showing the results from the implementation of the McAllister-Ianelli data-weighting method to the model output using the SS_tune_comps() function. The observed sample size is on the x-axis and the effective sample size is on the y-axis with a horizontal green dashed line indicating the harmonic mean and a vertical green dashed line indicating the arithmetic mean.

The relationship between the observed sample size (the input sample number) and the effective sample size, where the effective sample size is the product of the input sample size and the data weighting applied to the data set.

There are a couple of challenges posed by the McAllister-Ianelli data-weighting approach:

  1. Subjective choice of how many iterations to take to achieve adequate convergence. Often just one iteration is applied.

  2. Takes time to implement so tuning is rarely repeated during retrospective or sensitivity analyses.

16.1.0.3 Francis


Implementation: recommended adjustments are calculated by the r4ss functions SSMethod.TA1.8() and SSMethod.Cond.TA1.8(). These functions are rarely used alone but are called by the SS_plots() function when making plots like the one below. For version 3.30 models, the simplest way to get the adjustments in the format required by the control file is to use the SS_tune_comps() function (described above under the McAllister-Ianelli method), but with a different option specified:

SS_tune_comps(replist, option = "Francis")
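For reference, a sketch of the TA1.8 calculation following Francis (2011), where \(O_y\) and \(E_y\) are the observed and expected mean length or age in year \(y\), \(v_y\) is the variance of the expected length or age distribution, and \(N_y\) is the input sample size; the suggested multiplier is

\[w = 1 \Big/ \mathrm{Var}_y\!\left[\frac{O_y - E_y}{\sqrt{v_y / N_y}}\right]\]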

The figure below shows the estimated 95% intervals around the observed mean length by year based on the input sample size (thick lines) and the increase in that uncertainty which would occur if the sample sizes were adjusted according to the proposed multiplier.

A plot produced from SS_plots() in r4ss showing the results from the implementation of the Francis data-weighting method to the output using the SS_tune_comps() function. Year is on the x-axis and mean length is on the y-axis. There are 95% confidence interval boxes around the mean displayed for each year. A blue line shows the expected variation of mean length across years.

The mean length of the length samples for each year from the MexCal S1 NSP fleet with 95% confidence intervals based on current sample sizes using the Francis data-weighting method (referred to as TA1.8). Thinner intervals with capped ends show the result of further adjusting sample sizes based on the suggested multiplier, 0.2739. The blue line shows the expected variation of the mean length across years.

There are several challenges posed by the Francis data-weighting approach:

  1. Subjective choice of how many iterations to take to achieve adequate convergence. Often, just one iteration is applied.

  2. Takes time to implement so tuning is rarely repeated during retrospective or sensitivity analyses.

  3. The recommended adjustment can be sensitive to outliers (removing a few years of anomalous composition data can lead to a large change in the recommended adjustment).

Finally, simulation work comparing the Francis and McAllister-Ianelli data-weighting approaches indicated that the arithmetic averaging of effective sample sizes in the McAllister-Ianelli approach was inferior to other methods (Punt 2016).

16.1.0.4 Dirichlet-multinomial


The Dirichlet-multinomial should only be used when there is a substantive basis for setting an approximate value for the input sample size. This is important because the estimated parameter acts as a multiplier on the input sample size, so the resulting effective sample size is only as meaningful as the input value.

To develop an input sample size, Jim Thorson recommends:

  1. Using nonparametric (bootstrap), design-based, or model-based estimators to identify the variance of expanded compositional data, and then deriving an input sample size from that (see the sketch after this list);

  2. Using a priori reasoning, e.g., about fishery compositions deserving a lower input sample size than a survey in cases when the input sample size for the survey is known, but that for the fishery is unknown;

  3. If the preceding options are not feasible, using accepted standards in that region and review context, i.e., assigning an input sample size of arbitrary value “X” if that is what has always been done in that region/stock, there is no basis for changing it, and it is greater than or equal to the likely effective sample size.
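As a rough sketch of option 1 (illustrative only; boot_props and the estimator form are assumptions, representing a bins-by-replicates matrix of bootstrap proportions of an expanded composition):

  # multinomial-equivalent sample size implied by bootstrap variability
  input_n_from_bootstrap <- function(boot_props) {
    p_mean <- rowMeans(boot_props)       # mean proportion in each bin
    p_var  <- apply(boot_props, 1, var)  # bootstrap variance in each bin
    # ratio of total multinomial variance to total bootstrap variance
    sum(p_mean * (1 - p_mean)) / sum(p_var)
  }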

Change the choice of likelihood and set parameter choices in the data file:
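For example, an illustrative sketch of the composition-settings line described in the data file section of this manual, with placeholder values:

  # mintailcomp addtocomp combine_M_F CompressBins CompError ParmSelect minsamplesize
  -1 0.0001 0 0 1 1 1

Setting CompError to 1 switches that fleet's compositions from the multinomial to the Dirichlet-multinomial likelihood, and ParmSelect indicates which ln(DM_theta) parameter that fleet uses.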

Add parameter lines to the control file:
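An illustrative parameter line (a sketch only; the bounds match the table below, and the normal prior with standard deviation 1.813 is one suggested choice to stabilize estimation when weights approach 100%):

  # LO HI INIT PRIOR PR_SD PR_type PHASE env_var&link dev_link dev_minyr dev_maxyr dev_PH Block Block_Fxn
  -5 20 0 0 1.813 6 2 0 0 0 0 0 0 0 # ln(DM_theta)_1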

The SS_output() function in r4ss returns a table like the following:

  $Dirichlet_Multinomial_pars
                     Value Phase Min Max     Theta Theta/(1+Theta)
  ln(DM_theta)_1 -0.164022     2  -5  20 0.8487233       0.4590862
  ln(DM_theta)_2  2.246280     2  -5  20 9.4525070       0.9043292

The ratio shown in the final column is the estimated multiplier which can be compared to the sample size adjustment estimated in the other tuning methods above (the New_Var_adj column in the table produced by the SS_tune_comps() function in r4ss).
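For example, using the first row of the table above: \(\theta = e^{-0.164} \approx 0.849\), so the multiplier is \(0.849/(1 + 0.849) \approx 0.459\), meaning the data are weighted at about 46% of their input sample sizes.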

If the reported \(\theta/(1+\theta)\) ratio is close to 1.0, that indicates that the model is trying to tune the sample size as high as possible. In this case, the \(\ln(\theta)\) parameters should be fixed at a high value, like the upper bound of 20, which will result in 100% weight being applied to the input sample sizes. An alternative would be to manually change the input sample sizes to a higher value so that the estimated weighting will be less than 100%.

Note that a constant of integration was added to the Dirichlet-multinomial likelihood equation in v.3.30.17. This will change the likelihood value, but parameter estimates and expected values should remain the same as in previous versions of SS3.

Some challenges posed by the Dirichlet-multinomial data-weighting approach:

  1. Does not allow weights above 100% (by design) so it is not yet clear how best to deal with the situation when the estimated weight is close to 100%.

  2. The parameterization has the potential to cause convergence issues or inefficient MCMC sampling when weights are close to 100% if no prior is applied, as discussed above.

16.2 Recruitment Variability and Bias Correction

Recruitments are defined as log-normal deviates around a log-bias adjusted spawner-recruitment curve. The magnitude of the log-bias adjustment is calculated from the level of \(\sigma_R\), which is the standard deviation of the recruitment deviations (in natural log space). There are 5 segments of the time series in which to consider the effect of the log-bias adjustment: virgin; initial equilibrium; early data-poor period; data-rich period; very-recent/forecast. The choice of break points between these segments need not correspond directly with the settings for the bias adjustment, although some alignment might be desired. Methot and Taylor (2011) provide more detailed discussion of the bias adjustment than what is provided below but do not address the separation of time periods into separate segments. The approach is illustrated with figures associated with the assessment for darkblotched rockfish (Gertseva and Thorson 2013).
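In brief (a sketch of the relationship derived in Methot and Taylor (2011)), if recruitment in year \(y\) is \[R_y = f(SB_y)\, e^{\tilde{R}_y - \sigma_R^2/2}, \quad \tilde{R}_y \sim N(0, \sigma_R^2),\] then \(E\left[e^{\tilde{R}_y - \sigma_R^2/2}\right] = 1\), so expected recruitment equals the value from the spawner-recruitment curve; without the \(-\sigma_R^2/2\) offset, the arithmetic mean recruitment would be inflated by the factor \(e^{\sigma_R^2/2}\). The bias adjustment fraction discussed below scales this offset in periods where deviations are less informed by data.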

A plot showing the spawner-recruitment relationship output from SS_plots in r4ss where the x-axis is spawning biomass in metric tons and the y-axis is recruitment in thousands. Red circles represent estimated recruitments. The solid black line shows the stock-recruit relationship and the green line shows the adjusted relationship to account for the log-normal distribution associated with each year.

Spawner-recruitment relationship for darkblotched rockfish (Gertseva and Thorson 2013). Red points represent estimated recruitments, the solid black line is the stock-recruit relationship and the green line represents the adjustment to this relationship after adjustment to account for the log-normal distribution associated with each year. The “+” symbol labeled 1915 near the right side represents both the virgin and initial equilibrium of the model. The numerous red points close to the initial conditions correspond to the early years of the model with low harvest rates.

A time-series plot from SS_plots output in r4ss showing the year on the x-axis and the natural log recruitment deviations on the y-axis. Caps around the mean show the 95 percent uncertainty interval for each year.

Time series of natural log recruitment deviations for darkblotched rockfish with 95% uncertainty intervals. The start year of the model is 1915, but recruitment deviations are estimated starting in 1870. The 45 deviation estimates for 1870–1914 inform the age structure used in the start year. The black color for the years 1960–2011 indicates the “main” recruitment deviation vector, while the blue color for the years 1870–1959 and 2012–2024 indicates the “early” and “late/forecast” recruitment deviation vectors, respectively.

16.2.0.1 Virgin Spawning Biomass


The \(R_{0}\) level of recruitment is a parameter of the spawner-recruitment curve. This recruitment and the corresponding spawning biomass are expected to represent the long-term arithmetic mean.

16.2.0.2 Initial Equilibrium


The level of recruitment is typically maintained at the \(R_{0}\) level even though the initial equilibrium catch will reduce the spawning biomass below the virgin level. If steepness is moderately low or the initial \(F\) is high, then the lack of response in recruitment level may appear paradoxical. The logic is that building in the spawner-recruitment response to initial \(F\) would significantly complicate the calculations and would imply that the initial equilibrium catch level had been going on for multiple generations. If the lack of response is considered to be problematic in a particular application, then start the model at an earlier year and with a lower initial equilibrium catch so that the dynamics of the spawner-recruitment response get captured in the early period, rather than getting lost in the initial equilibrium.

16.2.0.3 Early Data-Poor Period


This is the early part of the time series where the only data typically are landed catch. There are no data to inform the model about the specific year-to-year fluctuations in recruitment, although the ending years of this period will begin to be influenced by the data. The “early time period” is not a formal concept. It is up to the user to decide whether to start estimating recruitment deviations beginning with the first year of the model, or to delay such estimation until the data become more informative. Modeling recruitment deviations in this period may lead to a more realistic portrayal of the uncertainty in depletion, but can also lead to spurious patterns in estimated recruitments that may be driven by the fit to index data or other sources that would not be expected to have accurate information on recruitment.

16.2.0.4 Data-Rich Period


Here the length and/or age data inform the model about the year-to-year level of recruitment. These fluctuations in recruitment are assumed to have a log-normal distribution around the log-bias adjusted spawner-recruitment curve. The level of \(\sigma_R\) input to the model should match this level of fluctuation to a reasonable degree. Because the recruitments are log-normal, they produce a mean biomass level that is comparable to the virgin spawning biomass, and thus the depletion level can be calculated without bias. However, if the early period has recruitment deviations estimated by maximum posterior density, then the depletion levels during the early part of the data-rich period may retain some of the negative bias from the early time period. The level of \(\sigma_R\) should be at least as large as the level of variability in these estimated recruitments. If too high a level of \(\sigma_R\) is used, then a bias can occur in the estimate of spawner-recruitment steepness. This occurs because the early recruitments are taken directly from the spawner-recruitment curve, so are mean unbiased, while the later recruitments are estimated as deviations from the log-bias adjusted curve. If \(\sigma_R\) is too large, then the bias adjustment is too large, and the model may compensate by increasing steepness to keep the mean level of recent recruitments at the correct level.

16.2.0.5 Recent Years/Forecast


Here the situation is very similar to the early time period in that there are no data to inform the model about the year-to-year pattern in recruitment fluctuations, so all deviations will be pulled to a zero level in the maximum posterior density. The structure of SS3 creates no sharp dividing line between the estimation period and the forecast period. In many cases one or more recruitments at the end of the time series will lack appreciable signal in the data and should therefore be treated as forecast recruit deviations. To the degree that some variability is observed in these recruitments, partial or full bias correction may be desirable for these deviations separately from the purely forecast deviations; there is therefore an additional control for the level of bias correction applied to forecast deviations occurring prior to end year + 1.

Time-series plot produced using SS_plots in r4ss with year as the x-axis and asymptotic standard error estimates on the y-axis for the natural log recruitment deviations of darkblotched rockfish. The points in black are the main recruitment period and the red line at 0.75 indicates the \(\sigma_R\) value in this model.

Time series of standard error estimates for the natural log recruitment deviations for darkblotched rockfish with 95% uncertainty intervals. As in Figure 7, the black color indicates the main recruitment period. This period with lower standard error is associated with higher variability among deviations (Figure 7). The red line at 0.75 indicates the \(\sigma_R\) value in this model.

Plot of the transformation of the standard error estimates on the y-axis and year on the x-axis produced using r4ss. The red dashed line is the bias adjustment in the model and the blue line is the estimated alternative, which is a functional form that minimizes the sum of the squared differences between the bias adjustment function and the transformed standard error values.

Transformation of the standard error estimates (shown in Figure 8) for darkblotched rockfish following the approach suggested by Methot and Taylor (2011). These values were used to set the 5 values controlling the degree of bias adjustment (as a fraction of \(\sigma_R/2\)) to account for differences in the mean and median of the log-normal distribution from which the recruitment deviations are drawn. The red line indicates a bias adjustment of 0 up to 1960.75, ramping up to a maximum adjustment level of 0.877 for the period 1990.4–2008.98, and reducing back to 0 starting in 2013.08. Note that these values controlling the bias adjustment need not be integer year values. Also, the break points in the bias adjustment function need not match the break points between early, main, and late/forecast recruitment deviation vectors (indicated by blue and black colors in Figures 7 and 8). The blue line indicates a functional form that minimizes the sum of squared differences between the bias adjustment function and the transformed standard error values. The subtle differences between red and blue lines are unlikely to have any appreciable effect on the model results.

A time-series plot produced from SS_plots in r4ss with spawning depletion on the y-axis and year on the x-axis. Blue points show spawning depletion for recruitment deviations starting in 1870 with the blue shaded area indicating the 95 percent uncertainty intervals for these values. Red points show spawning depletion for recruitment deviations starting in 1960 with the red shaded area indicating the 95 percent uncertainty intervals for these values. A red dashed horizontal line at 0.40 indicates the management target, and a second red dashed horizontal line indicates the minimum stock size threshold.

Comparison of time-series of spawning depletion for darkblotched rockfish models with early recruitment deviations (starting in 1870) and without early deviations (only main recruitment deviations starting in 1960). The point estimates are similar, but the 95% uncertainty intervals are substantially different. With no recruitment deviations for the early period, the estimates of spawning depletion in the early years are very precise and uncertainty increases as the stock moves into the data rich period. In contrast, the addition of the early recruitment deviations results in a large uncertainty in spawning depletion for the early years and an increase in precision as the stock moves into the data rich period. In this application, the uncertainty associated with the recent years is independent of the assumptions about early recruitments.

16.2.0.6 Issues with Including Environmental Effects


The expected level of recruitment is a function of spawning biomass, an environmental time series, and a log-bias adjustment: \[E(\text{Recruitment}) = f(SpBio) \cdot e^{\beta \cdot envdata} \cdot e^{-0.5\sigma_R^2}\] Here \(\sigma_R\) is the variability of the deviations, which is in addition to the variance “created” by the environmental effect. So, as more of the total recruitment variability is explained by the environmental effect, the residual \(\sigma_R\) should be decreased. The model does not do this automatically.
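For example (a sketch assuming the environmental effect and the residual deviations are independent and additive on the log scale), if the total variability of log recruitment is \(\sigma_{total}\) and the environmental series has standard deviation \(\sigma_{env}\), the residual input could be reduced to \[\sigma_R = \sqrt{\sigma_{total}^2 - \beta^2\sigma_{env}^2}.\]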

The environmental effect is inherently log-normal. So when an environmental effect is included in the model, the arithmetic mean recruitment level will be increased above the level predicted by f(SpBio) alone. The consequences of this have not yet been thoroughly investigated, but there probably should be another bias correction based on the variability of the environmental data as scaled by the estimated linkage parameter, \(\beta\). It is also problematic that the environmental effect time series used as input is assumed to be measured without error.
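The magnitude of that increase follows from the log-normal mean (a one-line sketch, assuming the environmental series is approximately normally distributed with mean zero): \[E\left[e^{\beta \cdot envdata}\right] = e^{\beta^2\sigma_{env}^2/2} > 1,\] which is the inflation factor that an additional bias correction would need to offset.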

The preferred approach to including environmental effects on recruitment is not to use the environmental effect in the direct calculation of the expected level of recruitment. Instead, the environmental data would be used as if they were a survey observation of the recruitment deviation. This is similar to using the environmental index as a survey of age-0 recruitment abundance, except that by focusing on the fit to the deviations it removes the effect of SpBio on recruitment. In this alternative, \(\sigma_R\) would not be changed by the environmental data; instead, the environmental data would be used to explain some of the total variability represented by \(\sigma_R\). This approach may also allow greater uncertainty in forecasts, as the variability in projected recruitments would reflect both the uncertainty in the environmental observations themselves and the model fit to those observations.

16.2.0.7 Initial Age Composition


If the first year with recruitment deviations is set earlier than the start year of the model, then these early deviations will modify the initial age composition. The information on historical recruitment variability inevitably degrades as the model attempts to estimate deviations for older age groups in the initial equilibrium, so the degree of bias correction is reduced linearly in proportion to age, disappearing entirely at the maximum age. The initial age composition approach normally produces a result that is indistinguishable from a configuration that starts earlier in the time series and estimates a longer time series of recruitments. However, because the initial equilibrium is calculated from a recruitment level unaffected by spawner-recruitment steepness, and the initial age composition adjustments are applied after the initial equilibrium is calculated, the initial age composition approach may produce a slightly different result than starting the time series earlier and applying the deviations to the recruitment levels predicted from the spawner-recruitment curve.

17 References

Francis, R. I. C. Chris. 2011. “Data Weighting in Statistical Fisheries Stock Assessment Models.” Canadian Journal of Fisheries and Aquatic Sciences 68 (6): 1124–38. https://doi.org/10.1139/f2011-025.
Gertseva, Vladlena, and James T. Thorson. 2013. “Status of the Darkblotched Rockfish Resource Off the Continental U.S. Pacific Coast in 2013.” Pacific Fishery Management Council, 7700 Ambassador Place NE, Suite 200, Portland, OR 97220.
Grandin, Chris J., Kelli F. Johnson, Andrew M. Edwards, and Aaron M. Berger. 2020. “Status of Pacific Hake (Whiting) Stock in the U.S. and Canadian Waters in 2020.” National Marine Fisheries Service and Fisheries and Oceans Canada: Joint Technical Committee of the Pacific Hake/Whiting Agreement Between the Governments of the United States and Canada.
Gulland, JA. 1987. “Natural Mortality and Size.” Marine Ecology Progress Series 39 (2): 197–99.
Johnson, Kelli F., Elizabeth Councill, James T. Thorson, Elizabeth Brooks, Richard D. Methot, and André E. Punt. 2016. “Can Autocorrelated Recruitment Be Estimated Using Integrated Assessment Models and How Does It Affect Population Forecasts?” Fisheries Research 183 (November): 222–32. https://doi.org/10.1016/j.fishres.2016.06.004.
Lee, H. H., L. R. Thomas, K. R. Piner, and M. N. Maunder. 2017. “Effects of Age-Based Movement on the Estimation of Growth Assuming Random-at-Age or Random-at-Length Data.” Journal of Fish Biology 90 (1): 222–35. https://doi.org/10.1111/jfb.13177.
Lee, Huihua, Kevin R. Piner, Ian G. Taylor, and Toshihide Kitakado. 2019. “On the Use of Conditional Age at Length Data as a Likelihood Component in Integrated Population Dynamics Models.” Fisheries Research 216 (August): 204–11. https://doi.org/10.1016/j.fishres.2019.04.007.
Lehodey, Patrick, Inna Senina, and Raghu Murtugudde. 2008. “A Spatial Ecosystem and Populations Dynamics Model (SEAPODYM)–Modeling of Tuna and Tuna-Like Populations.” Progress in Oceanography 78 (4): 304–18.
Lorenzen, K. 1996. “The Relationship Between Body Weight and Natural Mortality in Juvenile and Adult Fish: A Comparison of Natural Ecosystems and Aquaculture.” Journal of Fish Biology 49 (4): 627–42.
Lorenzen, Kai. 2000. “Allometry of Natural Mortality as a Basis for Assessing Optimal Release Size in Fish-Stocking Programmes.” Canadian Journal of Fisheries and Aquatic Sciences 57 (12): 2374–81.
Maunder, Mark N. 2011. “Proposed Formulation for Age-Specific Patterns in Natural Mortality.” In Estimating Natural Mortality in Stock Assessment Applications, edited by J Brodziak, J. Ianelli, K. Lorenzen, and Jr. Methot R. D., 38. Silver Spring, Maryland, USA: U.S. Department of Commerce, NOAA Tech. Memo.
Maunder, Mark N, Alexandre Aires-da-Silva, Richard Deriso, Kurt Schaefer, and Daniel Fuller. 2010. “Preliminary Estimation of Age-and Sex-Specific Natural Mortality of Bigeye Tuna in the Eastern Pacific Ocean by Applying a Cohort Analysis with Auxiliary Information to Tagging Data.” Inter-Amer. Trop. Tuna Comm., Stock Assessment Report 10: 253–78.
Maunder, Mark N., Richard B. Deriso, Kurt M. Schaefer, Daniel W. Fuller, Alexandre M. Aires-da-Silva, Carolina V. Minte-Vera, and Steven E. Campana. 2018. “The Growth Cessation Model: A Growth Model for Species Showing a Near Cessation in Growth with Application to Bigeye Tuna (Thunnus Obesus).” Marine Biology 165 (4). https://doi.org/10.1007/s00227-018-3336-9.
Maunder, Mark N, and Richard B Deriso. 2003. “Estimation of Recruitment in Catch-at-Age Models.” Canadian Journal of Fisheries and Aquatic Sciences 60 (10): 1204–16. https://doi.org/10.1139/f03-104.
McAllister, M. K., and J. N. Ianelli. 1997. “Bayesian Stock Assessment Using Catch-Age Data and the Sampling - Importance Resampling Algorithm.” Canadian Journal of Fisheries and Aquatic Sciences 54: 284–300.
Methot, Richard D., and Ian G. Taylor. 2011. “Adjusting for Bias Due to Variability of Estimated Recruitments in Fishery Assessment Models.” Canadian Journal of Fisheries and Aquatic Sciences 68 (10): 1744–60. https://doi.org/10.1139/f2011-092.
Methot, Richard D., and Chantell R. Wetzel. 2013. “Stock Synthesis: A Biological and Statistical Framework for Fish Stock Assessment and Fishery Management.” Fisheries Research 142 (May): 86–99. https://doi.org/10.1016/j.fishres.2012.10.012.
Monnahan, Cole C, Trevor A Branch, James T Thorson, Ian J Stewart, and Cody S Szuwalski. 2019. “Overcoming Long Bayesian Run Times in Integrated Fisheries Stock Assessments.” ICES Journal of Marine Science 76 (6): 1477–88. https://doi.org/10.1093/icesjms/fsz059.
Piner, Kevin R., Hui-Hua Lee, and Mark N. Maunder. 2016. “Evaluation of Using Random-at-Length Observations and an Equilibrium Approximation of the Population Age Structure in Fitting the von Bertalanffy Growth Function.” Fisheries Research 180 (August): 128–37. https://doi.org/10.1016/j.fishres.2015.05.024.
Punt, André E. 2016. “Some Insights into Data Weighting in Integrated Stock Assessments.” Fisheries Research 192 (January): 52–65. https://doi.org/10.1016/j.fishres.2015.12.006.
Punt, André E., and Jason M. Cope. 2019. “Extending Integrated Stock Assessment Models to Use Non-Depensatory Three-Parameter Stock-Recruitment Relationships.” Fisheries Research 217: 46–57. https://doi.org/10.1016/j.fishres.2017.07.007.
Richards, F. J. 1959. “A Flexible Growth Function for Empirical Use.” Journal of Experimental Botany 10 (29): 290–300.
Schnute, Jon. 1981. “A Versatile Growth Model with Statistically Stable Parameters.” Canadian Journal of Fisheries and Aquatic Science 38: 1128–40.
Taylor, Ian G., Vladlena Gertseva, Richard D. Methot, and Mark N. Maunder. 2013. “A Stock-Recruitment Relationship Based on Pre-Recruit Survival, Illustrated with Application to Spiny Dogfish Shark.” Fisheries Research 142 (May): 15–21. https://doi.org/10.1016/j.fishres.2012.04.018.
Then, Amy Y, John M Hoenig, Norman G Hall, and David A Hewitt. 2015. “Evaluating the Predictive Performance of Empirical Estimators of Natural Mortality Rate Using Information on over 200 Fish Species.” ICES Journal of Marine Science 72 (1): 82–92.
Thompson, Grant G. 1994. “Confounding of Gear Selectivity and the Natural Mortality Rate in Cases Where the Former Is a Nonmonotone Function of Age.” Canadian Journal of Fisheries and Aquatic Sciences 51 (12): 2654–64. https://doi.org/10.1139/f94-265.
Thorson, James T., Kelli F. Johnson, Richard D. Methot, and Ian G. Taylor. 2017. “Model-Based Estimates of Effective Sample Size in Stock Assessment Models Using the Dirichlet-Multinomial Distribution.” Fisheries Research 192 (August): 84–93. https://doi.org/10.1016/j.fishres.2016.06.005.
Xu, Haikun, James T. Thorson, Richard D. Methot, and Ian G. Taylor. 2019. “A New Semi-Parametric Method for Autocorrelated Age- and Time-Varying Selectivity in Age-Structured Assessment Models.” Canadian Journal of Fisheries and Aquatic Sciences 76 (2): 268–85. https://doi.org/10.1139/cjfas-2017-0446.
Xu, Haikun, James T Thorson, and Richard D Methot. 2020. “Comparing the Performance of Three Data-Weighting Methods When Allowing for Time-Varying Selectivity.” Canadian Journal of Fisheries and Aquatic Sciences 77 (2): 247–63. https://doi.org/10.1139/cjfas-2019-0107.