AMA Manual of Style - Stacy L. Christiansen, Cheryl Iverson 2020
Study Design and Statistics
The essence of life is statistical improbability on a grand scale.
There are three kinds of lies: lies, damned lies, and statistics.
Attributed to Disraeli by Mark Twain1
In medical and health research, the quality of the statistical analysis and the presentation of results are critical to a study’s validity and influence. Decisions about statistical analysis are best made when studies are designed and should not occur after the data are collected. A fundamentally flawed study cannot be salvaged by statistical analysis. Regardless of the statistician’s role, authors are responsible for the appropriate design, analysis, and presentation of the study’s results.
Authors, journal editors, and manuscript editors of research manuscripts should have a general understanding of study designs, statistical terms and concepts, and the use of statistical tests and presentation. Statistical analyses should be completely summarized using brief but consistent language that will improve the reader’s understanding of the analysis.
19.1 The Manuscript: Presenting Study Design, Rationale, and Statistical Analysis.
Each portion of the manuscript should facilitate the reader’s understanding of why and how the study was done and (1) clearly state a hypothesis or study question, (2) show that the methods adequately answer the research question and that the data were appropriately analyzed, (3) convince the reader that the results are valid and credible, and (4) place the implications of the research in context and show that the study limitations do not preclude interpretation of the results.
A different font is used in this chapter to denote words defined in the glossary (see 19.5, Glossary of Statistical Terms).
Many readers will read only the abstract of a research article, so it should include as precise a summary of the content as possible. In addition, because readers may decide to review the entire article based on information in the abstract, it should be well written and carefully constructed.
By imposing order on how material is presented, a structured abstract helps writers and readers systematically evaluate information. A structured abstract is one that contains specific subsections and headings.
■The structured abstract should enable the reader to quickly and easily identify the study type, hypothesis or question, and methods.2
■The study question or the hypothesis (objective) should be clearly stated (eg, “To determine whether enalapril reduces left ventricular mass . . . ”); the study design, population, and setting from which the sample was drawn should be described; and the main outcome measures should be explained. The design or study type should be indicated (eg, randomized clinical trial, cohort study, meta-analysis).
■Study design and interventions should be specified.
■The results should include a brief description of the study participants or data included, and data should be presented in absolute numbers and some explanation of effect size, if appropriate, with point estimates, confidence intervals, and measures of statistical significance presented.
■The conclusions should follow from the results without overinterpreting the findings. A relevance statement should place the research findings in the context of the overall problem the research addresses and describe how the research findings might influence change or be used.
Abstracts are too brief for detailed explanation of statistical analyses, but a basic description may be appropriate (eg, “The screening test was validated by means of a bootstrap method and performance tested with a receiver operating characteristic curve.”) (see 2.5, Abstract).
For clinical trials that have been registered in an appropriate public trial registry, the name of the registry and trial identification number should be provided at the end of the abstract (see 19.2, Randomized Clinical Trials).
The following is an example structured abstract for an observational study3:
Importance The Third International Consensus Definitions Task Force defined sepsis as “life-threatening organ dysfunction due to a dysregulated host response to infection.” The performance of clinical criteria for this sepsis definition is unknown.
Objective To determine the validity of clinical criteria to identify patients with suspected infection who are at risk for sepsis.
Design, Setting, and Population In a cohort study of 1.3 million electronic health record encounters from January 1, 2010, to December 31, 2012, at 12 hospitals in southwestern Pennsylvania, patients with suspected infection in whom criteria could be compared were identified. Confirmatory analyses were performed in 4 data sets of 706 399 out-of-hospital and hospital encounters at 165 US and non-US hospitals from January 1, 2008, until December 31, 2013.
Exposures Sequential [Sepsis-related] Organ Failure Assessment (SOFA) score, systemic inflammatory response syndrome (SIRS) criteria, Logistic Organ Dysfunction System (LODS) score, and a new model derived using multivariable logistic regression in a split sample, the quick Sequential [Sepsis-related] Organ Failure Assessment (qSOFA) score (range, 0-3 points, with 1 point each for systolic hypotension [blood pressure ≤100 mm Hg], tachypnea [heart rate ≥22/min], or altered mentation).
Main Outcomes and Measures For construct validity, pairwise agreement was assessed. For predictive validity, the discrimination for outcomes (primary: in-hospital mortality; secondary: in-hospital mortality or intensive care unit [ICU] length of stay ≥3 days) more common in sepsis than uncomplicated infection was determined. Results were expressed as the fold change in outcome over deciles of baseline risk of death and area under the receiver operating characteristic curve (AUROC).
Results In the primary cohort, 148 907 encounters had suspected infection (n = 74 453 derivation; n = 74 454 validation) (mean [SD] age, 61  years, 85 563 women [57%]) of whom 6347 (4%) died. Among ICU encounters in the validation cohort (n = 7932 with suspected infection, of whom 1289 [16%] died), the predictive validity for in-hospital mortality was lower for SIRS (AUROC = 0.64; 95% CI, 0.62-0.66) and qSOFA (AUROC = 0.66; 95% CI, 0.64-0.68) vs SOFA (AUROC = 0.74; 95% CI, 0.73-0.76; P < .001 for both) or LODS (AUROC = 0.75; 95% CI, 0.73-0.76; P < .001 for both). Among non-ICU encounters in the validation cohort (n = 66 522 with suspected infection, of whom 1886 [3%] died), qSOFA had predictive validity (AUROC = 0.81; 95% CI, 0.80-0.82) that was greater than SOFA (AUROC = 0.79; 95% CI, 0.78-0.80; P < .001) and SIRS (AUROC = 0.76; 95% CI, 0.75-0.77; P < .001). Relative to qSOFA scores lower than 2, encounters with qSOFA scores of 2 or higher had a 3- to 14-fold increase in hospital mortality across baseline risk deciles. Findings were similar in external data sets and for the secondary outcome.
Conclusions and Relevance Among ICU encounters with suspected infection, the predictive validity for in-hospital mortality of SOFA was not significantly different than the more complex LODS but was statistically greater than SIRS and qSOFA, supporting its use in clinical criteria for sepsis. Among encounters with suspected infection outside the ICU, the predictive validity for in-hospital mortality of qSOFA was statistically greater than SOFA and SIRS, supporting its use as a prompt to consider possible sepsis.
From JAMA. 2016;315(8):762-774. doi:10.1001/jama.2016.0288
In the Introduction, briefly review the literature that documents the nature and importance of the research and the rationale for the study. A full literature review belongs in the Discussion section. The study hypothesis or research question should be clearly stated in the last sentence(s) of the Introduction before the Methods section, preferably including the word hypothesis (or question). Introductions should be concise, generally 150 to 350 words, and should avoid reciting information about a topic that is generally known by readers (eg, “The incidences of obesity and diabetes are increasing in parallel throughout the world.”).
Results or conclusions do not belong in the Introduction section of a manuscript.
The Methods section should include enough information to enable a knowledgeable reader to replicate the study and, given the original data, verify the reported results. Analyses should follow the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network reporting guidelines4 and be consistent with the study protocol and statistical analysis plan or described as post hoc. Components should include as many of the following as are applicable to the study design:
■Study design (see 19.2, Randomized Clinical Trials; 19.3, Observational Studies; and 19.4, Significant Digits and Rounding Numbers).
■Year(s) (and exact dates, if appropriate) when the study was conducted or data were collected and when the data were analyzed.
■Disease or condition to be studied—how was it defined? State the specific case definition if there was one. If measurements were used to define cases, state what these were and what values were used to establish a diagnosis (eg, “Patients were diagnosed as having a myocardial infarction if their serum troponin level was more than 0.4 ng/mL.”).
■Setting in which participants were studied (eg, community based, referral population, primary care clinic), as well as geographic location and, if applicable, name of institution(s).
■Type of research participants or other data studied. Who or what was eligible for inclusion in the study and who was excluded. Specify the inclusion criteria or exclusion criteria. If all participants were not included in each analysis, the reason for exclusions should be stated. If the methods or the results have been previously reported, provide citations for all reports or ensure that different reports of the same study can be easily identified (eg, by using a unique study name).
■For all studies (except meta-analyses), information about review and approval or waiver by institutional review board or ethics committee should be detailed when appropriate (see 5.8.1, Ethical Review of Studies and Informed Consent). For studies that involve human participants, the method used to obtain informed consent should be reported, as well as whether consent was written or oral. Describe whether compensation or other incentives were provided to study participants.
■Intervention(s) or Exposure(s), including their duration. In general, sufficient detail should be provided to enable other investigators to replicate the interventions (including where to obtain the full study protocol) and to facilitate comparison with other studies. Treatments administered to or exposures experienced by control or comparison groups should also be described in detail.
■Ideally, there should be only 1 primary outcome variable, although on occasion there may be more than 1. All primary outcome variables should be specified. The primary outcome variable is the variable used to determine the study sample size. All other outcomes (such as prespecified secondary outcomes) and how they were measured should be described. The reliability of measures and whether investigators determining outcomes were blinded to which group received the intervention or underwent the exposure should also be provided. Because the terms double-blinding or triple-blinding have no specific definitions, authors should specifically identify each blinded group and how blinding was achieved (eg, when drugs are administered intravenously and the drugs have different colors, that the intravenous tubing was covered with foil).
■The Methods section should also describe what all the other variables were and how they were measured; for example, demographic variables and disease risk factors should be specified. These variables are often used to assess or adjust for confounding of the relationship or association between the dependent variable and independent variables. The unit of analysis should be explicitly stated.
■Preliminary analyses: if the study is a preliminary analysis of an ongoing study, the reason for publishing data before the end of the study should be clearly stated, along with information regarding whether and when the study is to be completed. Authors should indicate whether such analyses were preplanned at the time the study began.5
■Data to be analyzed in the study should be sufficiently described. For clinical trials, a data sharing statement is required. Data sharing statements can be provided for other study types as well (see Data Sharing, Deposit, and Access Requirements of Journals).
■Sources of data not collected directly by the authors should be reported in the Methods section. For example, national census data may be used to calculate incidence rates. Database repositories or websites can be used to store or display data or information that could not be included in the manuscript, and they should be made publicly available, if possible. Information essential to the conduct of the study or its interpretation should be included in the manuscript. Authors should consider providing this information in an online supplement to the manuscript or depositing the relevant data in a publicly accessible repository (see 2.10.17, Additional Contributions).
■The end of the Methods section should include a Statistical Analysis subsection that describes all statistical methods, including procedures used for each analysis, all statistical tests used, and the statistical software, programs, and modules used to perform the analysis, including their version numbers. Define statistical terms, abbreviations, and symbols, if included. Tests used to calculate point estimates and CIs or other measures of variance should be described. The α level used to determine statistical significance should be specified, as should whether the tests were 1- or 2-sided. If analysis code is included, it should be placed in the online supplementary content.
The power of the study (which should have been calculated before the study was conducted to determine sample size) should be reported, as should assumptions made to calculate the study power. Data used to calculate power (eg, expected mean values for measurements and the SD) should be provided, along with a reference to prior publications relied on to make these assumptions. Authors should explicitly state why the minimal clinically important difference used for the study power calculation is assumed to be clinically relevant.
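The sample-size reasoning described above can be sketched with the standard normal-approximation formula for comparing 2 group means; the minimal clinically important difference, SD, α, and power used below are hypothetical planning values, not a prescription.

```python
import math
from statistics import NormalDist

def n_per_group(delta: float, sd: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a 2-sided, 2-sample comparison of means."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a 2-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to the desired power
    n = 2 * ((z_alpha + z_beta) ** 2) * (sd / delta) ** 2
    return math.ceil(n)                 # round up to a whole participant

# Hypothetical assumptions: detect a 5-point difference, SD of 12, α = .05, power = 80%
print(n_per_group(delta=5, sd=12))
```

Note that raising the desired power (or shrinking the assumed difference) increases the required sample size, which is why the assumptions behind these inputs must be reported and justified.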
How and why data were transformed should be reported. Note that skewed data alone do not necessarily require transformation; transformation is usually required when the residual error is skewed. If multiple comparisons were performed, authors should specify how significance tests were adjusted to account for them. The exact steps used for developing models in multivariable analysis and pertinent references for statistical tests should also be specified. Detailed reporting of model fit statistics should be provided. In general, how well variation in the outcome variable is explained by the model should be reported. Examples include reporting R2 or pseudo-R2 values (R is italicized; see 21.9.4, Italics). Authors should list the assumptions underlying the use of all statistical tests and note how these assumptions were met. Test statistics should include degrees of freedom whenever applicable. It is always preferable for results to be presented in terms of point estimates and confidence intervals, which convey more information than do P values alone. Do not rely on hypothesis testing alone. If P values are to be reported for simple descriptive comparisons, the comparative data should always be reported along with the P values.
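Two common adjustments for multiple comparisons, Bonferroni and the less conservative Holm step-down procedure, can be sketched as follows; the P values below are hypothetical.

```python
def bonferroni(p_values, alpha=0.05):
    """Reject H0 where P is below alpha divided by the number of tests."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

def holm(p_values, alpha=0.05):
    """Holm step-down: compare ordered P values to successively larger thresholds."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] < alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one ordered test fails, all larger P values also fail
    return reject

ps = [0.003, 0.021, 0.012, 0.60]  # hypothetical P values from 4 comparisons
print(bonferroni(ps))  # Holm rejects more of these than Bonferroni does
print(holm(ps))
```

Whichever procedure is used, the Methods section should name it and state the family of comparisons it was applied to.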
Procedures used for managing outliers, managing loss to or unavailability for follow-up, and modeling missing data should be specified. How outlying values were identified and treated should also be disclosed.
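One simple, widely used rule for identifying outlying values is the interquartile-range (Tukey) rule; the sketch below uses hypothetical data, and reporting should state whichever rule was actually applied.

```python
from statistics import quantiles

def iqr_outliers(values, k=1.5):
    """Flag values more than k IQRs beyond the quartiles (Tukey's rule)."""
    q1, _, q3 = quantiles(values, n=4)  # first and third quartiles
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

print(iqr_outliers([4, 5, 5, 6, 6, 7, 7, 8, 30]))  # the extreme value is flagged
```

However identified, the disposition of flagged values (retained, winsorized, or excluded, and why) belongs in the Methods section.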
The Results section should begin with a brief description of the study participants or data assessed, including demographic information (eg, number of participants, mean [SD] age, and number [percentage] of female and male participants).
In the reporting of results, when possible, quantify findings and present them with appropriate indicators of measurement error or uncertainty, such as CIs. Authors should avoid relying solely on statistical hypothesis testing, such as the use of P values, which fails to convey important quantitative information. For most studies, P values should follow the reporting of comparisons of absolute numbers or rates and measures of uncertainty (eg, 0.8%; 95% CI, −0.2% to 1.8%; P = .13). P values should never be presented alone without the data that are being compared. If P values are reported, follow standard conventions for decimal places: for P values less than .001, report as P < .001; for P values between .001 and .01, report the value to the nearest thousandth; for P values greater than or equal to .01, report the value to the nearest hundredth; and for P values greater than .99, report as P > .99. For studies with exponentially small P values (eg, genetic association studies), P values may be reported with exponents (eg, P = 1 × 10⁻⁵).
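These rounding conventions are mechanical enough to express as a small formatting function. The sketch below follows the AMA convention that values of .01 or greater are reported to 2 decimal places and that the leading zero is omitted; the function name is ours.

```python
def format_p(p: float) -> str:
    """Format a P value per the rounding conventions described above."""
    if p < 0.001:
        return "P < .001"
    if p > 0.99:
        return "P > .99"
    if p < 0.01:
        value = f"{p:.3f}"  # .001 <= P < .01: nearest thousandth
    else:
        value = f"{p:.2f}"  # P >= .01: nearest hundredth
    return "P = " + value.lstrip("0")  # AMA style omits the leading zero

print(format_p(0.0004))  # P < .001
print(format_p(0.0042))  # P = .004
print(format_p(0.034))   # P = .03
print(format_p(0.995))   # P > .99
```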
Authors should provide the number of observations in observational studies and the number of participants randomly assigned in randomized clinical trials. Losses to or unavailability for observation or follow-up should be reported. For multivariable models, all variables included in final models should be reported, as should model diagnostics and overall fit of the model when available.
Authors should avoid nontechnical uses of technical terms in statistics, such as correlation, normal, predictor, random, sample, significant, and trend. Inappropriate hedge terms, such as marginal significance or trend toward significance, for results that are not statistically significant should not be used. For observational studies, methods and results should be described in terms of association or correlation and should avoid cause-and-effect wording. Randomized trials may use terms such as effect and causal relationship. Also avoid uninformative modifiers of the strength of a finding, such as “strong association.” If the association is statistically significant, use “significant association.”
To give readers a sense of how the study population represents the overall population, the Results section should include the number of individuals (or other data units) initially screened for study inclusion, the number who were eligible for study entry, and the number who were excluded, had dropped out, or were lost to or unavailable for follow-up at each point in the study. For example, the JAMA Network journals require a figure that shows the flow of participants through clinical trials (see 19.2, Randomized Clinical Trials) or the studies included in a meta-analysis. The completeness of follow-up should be explicitly stated, as should the results of any missing-data and outlier analyses. In general, multiple imputation is the preferred method for modeling missing data. Authors should provide a table of descriptive statistics about the sample and, if appropriate, the individual subgroups. Primary outcome measures should be discussed after the study population is described, followed by secondary outcome measures. Prespecified secondary outcomes should be reported after the primary outcome variable(s). If any prespecified outcomes are not being reported, explain why and whether they have been or will be reported elsewhere.
Post hoc analyses may be presented, but they should be identified as such. Results of post hoc analyses may be unreliable, and thus such analyses should be used for generating rather than testing hypotheses (see type I error).
If 1 statistical test has been used throughout the manuscript, the test should be clearly stated in the Methods section. If more than 1 statistical test is used, the various statistical tests performed should be discussed in the Methods section and the specific test used reported along with the corresponding results. Tests of relative results (eg, relative risk, odds ratio) may overstate the real magnitude of differences between groups, particularly when absolute magnitudes are small. Thus, when presenting relative results, authors should also report an absolute difference along with a measure of the central tendency of the groups (eg, mean or median) and appropriate CIs.
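The caution above about relative measures can be illustrated with a small worked example: a relative risk of 2.0 may correspond to an absolute difference of only 0.1%. The event counts below are hypothetical.

```python
def risk_summary(events_a: int, n_a: int, events_b: int, n_b: int) -> dict:
    """Compare two groups by relative risk and absolute risk difference."""
    risk_a = events_a / n_a
    risk_b = events_b / n_b
    return {
        "relative_risk": risk_a / risk_b,
        "absolute_difference": risk_a - risk_b,
    }

# Hypothetical: 2 vs 1 events per 1000 participants
s = risk_summary(2, 1000, 1, 1000)
print(f"Relative risk: {s['relative_risk']:.1f}")              # 2.0
print(f"Absolute difference: {s['absolute_difference']:.1%}")  # 0.1%
```

Reporting both quantities, each with an appropriate CI, lets readers judge whether a large relative effect is clinically meaningful in absolute terms.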
Results should not be displayed only in a figure. Figures optimally display patterns and trends, but tables give exact values. Numerical results from a study’s primary outcome variable and important secondary variables should be reported in a table, with the most important findings described in the text of the Results section.
Authors should address whether the hypothesis was supported or refuted by the study results or how the study question was answered. The study result should be placed in the context of published literature. The limitations of the study should be discussed, especially possible sources of bias and how these problems might affect conclusions and generalizability. Evidence that supports or refutes the influence of these limitations should be provided. Sometimes, study limitations are presented in a separate subsection of the Discussion section. The implications for clinical practice, if any, and specific directions for future research may be offered (avoid generic uninformative recommendations, such as “future research is needed”; provide specific research recommendations). The conclusions should not go beyond the data and should be based on the study results and limited to the specific population represented by the study sample. The relevance of the findings as they pertain to clinical practice or the state of the science should be discussed.
One general approach to writing a Discussion section includes 7 parts:
1.Briefly summarize the study and the main results in a paragraph or two. Be sure to answer the research question posed in the Introduction.
2.Interpret the results and suggest an explanation for them.
3.Describe how the results compare with what else is known about the problem; review the literature and put the results in context.
4.Suggest how the results might be generalized.
5.Discuss the implication of the results.
6.Under a separate subheading (Strengths and Limitations), describe the strengths and limitations of the study, their possible effects on the results, and, if possible, the steps taken to minimize their effects.
7.Under a separate subheading (Conclusions), report the conclusions.