Statistics: Definition, Types, and Importance

What Is Statistics?

Statistics is a branch of applied mathematics that involves the collection, description, analysis, and inference of conclusions from quantitative data. The mathematical theories behind statistics rely heavily on differential and integral calculus, linear algebra, and probability theory.

People who do statistics are referred to as statisticians. They’re particularly concerned with determining how to draw reliable conclusions about large groups and general events from the behavior and other observable characteristics of small samples. These small samples represent a portion of the large group or a limited number of instances of a general phenomenon.

Key Takeaways

  • Statistics is the study and manipulation of data, including ways to gather, review, analyze, and draw conclusions from data.
  • The two major areas of statistics are descriptive and inferential statistics.
  • Statistical data can be measured at different levels, ranging from non-numerical labels (nominal level) to numerical values measured against a true zero point (ratio level).
  • Several sampling techniques can be used to compile statistical data, including simple random, systematic, stratified, or cluster sampling.
  • Statistics are present in almost every department of every company and are an integral part of investing.

Understanding Statistics

Statistics are used in virtually all scientific disciplines, such as the physical and social sciences as well as in business, the humanities, government, and manufacturing. Statistics is fundamentally a branch of applied mathematics that developed from the application of mathematical tools, including calculus and linear algebra, to probability theory.

In practice, statistics is the idea that we can learn about the properties of large sets of objects or events (a population) by studying the characteristics of a smaller number of similar objects or events (a sample). Gathering comprehensive data about an entire population is too costly, difficult, or impossible in many cases, so statisticians start with a sample that can be observed conveniently and affordably.

Statisticians measure and gather data about the individuals or elements of a sample, then they analyze this data to generate descriptive statistics. They can then use these observed characteristics of the sample data, which are properly called “statistics,” to make inferences or educated guesses about the unmeasured characteristics of the broader population, known as the parameters.

Statistics informally dates back centuries. An early record of correspondence between French mathematicians Pierre de Fermat and Blaise Pascal in 1654 is often cited as an early example of statistical probability analysis.

Descriptive and Inferential Statistics

The two major areas of statistics are known as descriptive statistics, which describes the properties of sample and population data, and inferential statistics, which uses those properties to test hypotheses and draw conclusions. Descriptive statistics include mean (average), variance, skewness, and kurtosis. Inferential statistics include linear regression analysis, analysis of variance (ANOVA), logit/Probit models, and null hypothesis testing.

Descriptive Statistics

Descriptive statistics mostly focus on the central tendency, variability, and distribution of sample data. Central tendency is an estimate of a typical element of a sample or population, and it includes descriptive statistics such as mean, median, and mode.

Variability refers to a set of statistics that show how much difference there is among the elements of a sample or population along the characteristics measured. It includes metrics such as range, variance, and standard deviation.

The distribution refers to the overall “shape” of the data, which can be depicted on a chart such as a histogram or a dot plot, and includes properties such as the probability distribution function, skewness, and kurtosis. Descriptive statistics can also describe differences between observed characteristics of the elements of a data set. They can help us understand the collective properties of the elements of a data sample and form the basis for testing hypotheses and making predictions using inferential statistics.
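
To make these ideas concrete, here is a minimal Python sketch, using only the standard library's statistics module and a small, made-up sample of prices, that computes the central tendency and variability measures described above:

```python
import statistics

# Hypothetical sample: seven observed prices (in dollars).
sample = [42.0, 45.5, 44.0, 47.5, 44.0, 50.0, 43.5]

# Central tendency: what a "typical" element of the sample looks like.
mean = statistics.mean(sample)      # arithmetic average
median = statistics.median(sample)  # middle value when the data are sorted
mode = statistics.mode(sample)      # most frequently occurring value

# Variability: how spread out the elements are.
data_range = max(sample) - min(sample)
variance = statistics.variance(sample)  # sample variance
std_dev = statistics.stdev(sample)      # sample standard deviation

print(f"mean={mean:.2f}, median={median}, mode={mode}")
print(f"range={data_range}, variance={variance:.2f}, stdev={std_dev:.2f}")
```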

Inferential Statistics

Inferential statistics is a tool that statisticians use to draw conclusions about the characteristics of a population from the characteristics of a sample, and to determine how certain they can be of the reliability of those conclusions. Based on the sample size and distribution, statisticians can calculate the probability that statistics, which measure the central tendency, variability, distribution, and relationships between characteristics within a data sample, provide an accurate picture of the corresponding parameters of the whole population from which the sample is drawn.

Inferential statistics are used to make generalizations about large groups, such as estimating average demand for a product by surveying a sample of consumers’ buying habits or attempting to predict future events. This might mean projecting the future return of a security or asset class based on returns in a sample period.

Regression analysis is a widely used technique of statistical inference used to determine the strength and nature of the relationship (the correlation) between a dependent variable and one or more explanatory (independent) variables. The output of a regression model is often analyzed for statistical significance, which refers to the claim that a result from findings generated by testing or experimentation is not likely to have occurred randomly or by chance. It’s likely to be attributable to a specific cause elucidated by the data.
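
As an illustration only, the following sketch runs a simple linear regression with SciPy's stats.linregress on made-up advertising and sales figures and reports the p-value commonly used to judge statistical significance. The data, variable names, and the 0.05 threshold are assumptions for the example, not from the article.

```python
from scipy import stats

# Hypothetical data: advertising spend (explanatory variable) and unit sales (dependent variable).
ad_spend = [10, 20, 30, 40, 50, 60, 70, 80]
unit_sales = [25, 31, 48, 52, 63, 68, 84, 89]

# Ordinary least squares regression of sales on advertising spend.
result = stats.linregress(ad_spend, unit_sales)

print(f"slope={result.slope:.2f}, intercept={result.intercept:.2f}")
print(f"correlation r={result.rvalue:.3f}")
# A small p-value (a common convention is below 0.05) is read as evidence that
# the relationship is unlikely to have occurred by chance alone.
print(f"p-value={result.pvalue:.4f}")
```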

Having statistical significance is important for academic disciplines or practitioners that rely heavily on analyzing data and research.

Mean, Median, and Mode

The terms “mean,” “median,” and “mode” fall under the umbrella of central tendency. They describe an element that’s typical in a given sample group. You can find the mean by adding the numbers in the group and dividing the result by the number of observations in the data set.

The middle number in the set is the median. Half of all included numbers are higher than the median, and half are lower. The median home value in a neighborhood would be $350,000 if five homes were located there and valued at $500,000, $400,000, $350,000, $325,000, and $300,000. Two values are higher, and two are lower.

The mode is the value that appears most frequently in the data set; it identifies the most common outcome rather than a middle or average value.
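
A short sketch reproducing the home-value example above with Python's standard statistics module (the second list, with a repeated value, is made up purely to illustrate the mode):

```python
import statistics

# The five home values from the example above.
home_values = [500_000, 400_000, 350_000, 325_000, 300_000]

print(statistics.mean(home_values))    # 375000: the average value
print(statistics.median(home_values))  # 350000: the middle value; two are higher, two are lower

# No value repeats above, so the mode isn't informative there.
# A hypothetical set with a repeated value shows it better:
print(statistics.mode([300_000, 350_000, 350_000, 400_000]))  # 350000: the most frequent value
```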

Understanding Statistical Data

Statistics is built on variables. A variable is a characteristic or attribute of an item that can be counted or measured. For example, a car can have variables such as make, model, year, mileage, color, or condition. By combining the variables across a set of data, such as the colors of all cars in a given parking lot, statistics allows us to better understand trends and outcomes.

There are two main types of variables:

First, qualitative variables are attributes that are non-numeric. Many of the variables in the car example are qualitative, and other examples in statistics are gender, eye color, or city of birth. Qualitative data is most often used to determine what percentage of outcomes falls into each category. Qualitative analysis often does not rely on numbers: for example, determining what percentage of women own a business involves analyzing qualitative data.

The second type of variable in statistics is the quantitative variable. Quantitative variables are studied numerically; like quantitative analysis, this information is rooted in numbers, but those numbers only carry meaning when tied to the attribute they describe. In the car example above, the mileage driven is a quantitative variable, but the number 60,000 holds no value unless it is understood to be the total number of miles driven.

Quantitative variables can be further broken into two categories. First, discrete variables can take only certain values, which implies there are gaps between the possible values. The number of points scored in a football game is a discrete variable because:

  1. There can be no decimals.
  2. It is impossible for a team to score only one point.

Statistics also makes use of continuous quantitative variables, whose values run along a scale. While discrete values are limited to certain numbers, continuous variables are often measured in decimals: any value within the possible limits can be obtained when measuring the height of football players, and heights can be measured down to 1/16th of an inch, if not further.
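
A brief sketch of the distinction, using made-up football data (the variable names and values are hypothetical):

```python
# Discrete: only certain values are possible, with gaps between them.
points_scored = [21, 14, 35, 3, 28]  # whole numbers only; no fractional scores

# Continuous: any value along a scale, measured as precisely as the instrument allows.
player_heights_inches = [74.0, 71.5, 76.25, 69.9375]  # 69.9375 = 69 and 15/16 inches

print(sum(points_scored) / len(points_scored))          # average points per game
print(min(player_heights_inches), max(player_heights_inches))
```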

Statisticians can hold various titles and positions within a company. The average total compensation for a statistician with one to three years of experience was $81,885 as of December 2023. This increased to $109,288 with 15 years of experience.

Statistical Levels of Measurement

Variables and outcomes can be measured at several different levels. Statistics can quantify outcomes in four ways.

Nominal-level Measurement

There’s no numerical or quantitative value, and qualities are not ranked. Nominal-level measurements are instead simply labels or categories assigned to other variables. It’s easiest to think of nominal-level measurements as non-numerical facts about a variable.

Example: The name of the president elected in 2020 was Joseph Robinette Biden Jr.

Ordinal-level Measurement

Outcomes can be arranged in an order, but all data values carry the same weight. Although they’re numerical, ordinal-level measurements can’t be meaningfully subtracted from one another because only the position of the data point matters. Ordinal levels are often incorporated into nonparametric statistics and compared against the total variable group.

Example: American Fred Kerley was the second-fastest man at the 2020 Tokyo Olympics based on 100-meter sprint times.

Interval-level Measurement

Outcomes can be arranged in order, and differences between data values now have meaning. Two data points are often used to compare the passing of time or changing conditions within a data set. There is no true “starting point” for the range of data values, however: calendar dates and temperatures don’t have a meaningful intrinsic zero value.

Example: Inflation hit 8.6% in May 2022. The last time inflation was this high was in December 1981.

Ratio-level Measurement

Outcomes can be arranged in order, and differences between data values have meaning. But there’s also a true starting point or “zero value,” so ratios between data values, and each value’s distance from zero, are meaningful as well.

Example: The lowest meteorological temperature recorded was -128.6 degrees Fahrenheit in Antarctica.

Statistics Sampling Techniques

It’s often not possible to gather data from every data point within a population. Statistics relies instead on different sampling techniques to create a representative subset of the population that’s easier to analyze. In statistics, there are several primary types of sampling.

Simple Random Sampling

Simple random sampling calls for every member within the population to have an equal chance of being selected for analysis. The entire population is used as the basis for sampling, and any random generator based on chance can select the sample items. For example, 100 individuals are lined up and 10 are chosen at random.
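
A minimal sketch of the lined-up-individuals example using Python's random module (the population is simply the numbers 1 through 100):

```python
import random

# Hypothetical population: 100 individuals identified by number.
population = list(range(1, 101))

# Every member has an equal chance of selection; 10 are chosen at random.
simple_random_sample = random.sample(population, k=10)
print(sorted(simple_random_sample))
```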

Systematic Sampling

Systematic sampling calls for a random sample as well, but its technique is slightly modified to make it easier to conduct. A single random starting point is chosen, and individuals are then selected at a regular interval until the sample size is complete. For example, 100 individuals are lined up and numbered. With a sampling interval of 10 (100 individuals divided by a sample size of 10), the seventh individual might be selected at random as the starting point, followed by every 10th individual after that, until 10 sample items have been selected.
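
A sketch of the same idea in code, assuming a population of 100 and a sample size of 10, which gives a sampling interval of 10:

```python
import random

population = list(range(1, 101))  # 100 numbered individuals
sample_size = 10
interval = len(population) // sample_size  # 100 // 10 = 10

# One random starting point, then every interval-th individual after it.
start = random.randrange(interval)
systematic_sample = population[start::interval][:sample_size]
print(systematic_sample)
```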

Stratified Sampling

Stratified sampling calls for more control over your sample. The population is divided into subgroups based on similar characteristics, and you then calculate how many people from each subgroup are needed for the sample to proportionally represent the entire population. For example, 100 individuals are grouped by gender and race, and a sample is taken from each subgroup in proportion to that subgroup’s share of the population.
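
A minimal sketch of proportional stratified sampling, with a made-up population split into two hypothetical subgroups:

```python
import random

# Hypothetical population: each person carries a subgroup (stratum) label.
population = [{"id": i, "group": "A" if i % 5 else "B"} for i in range(1, 101)]
sample_size = 10

# Divide the population into strata.
strata = {}
for person in population:
    strata.setdefault(person["group"], []).append(person)

# Sample from each stratum in proportion to its share of the population.
stratified_sample = []
for group, members in strata.items():
    n = round(sample_size * len(members) / len(population))
    stratified_sample.extend(random.sample(members, n))

print(sorted(p["id"] for p in stratified_sample))
```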

Cluster Sampling

Cluster sampling calls for subgroups as well, but each subgroup should be representative of the population. The entire subgroup is randomly selected instead of randomly selecting individuals within a subgroup.
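
A short sketch of cluster sampling under the assumption that the population is already organized into groups (here, ten hypothetical offices of ten employees each); whole clusters are chosen at random and every member of a chosen cluster is observed:

```python
import random

# Hypothetical population organized into clusters: 10 offices of 10 employees each.
clusters = {f"office_{c}": [f"employee_{c}_{i}" for i in range(10)] for c in range(10)}

# Randomly select whole clusters, then include every individual inside them.
chosen_offices = random.sample(list(clusters), k=3)
cluster_sample = [person for office in chosen_offices for person in clusters[office]]

print(chosen_offices)
print(len(cluster_sample))  # 3 clusters x 10 people = 30 observations
```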

Not sure which Major League Baseball player should have won Most Valuable Player last year? Statistics such as batting average, number of home runs hit, and stolen bases are often cited when the case is made for the best player.

Uses of Statistics

Statistics is prominent in finance, investing, business, and everyday life. Much of the information you see and the data you’re given are derived from statistics, which are used in all facets of a business.

  • Statistics in investing include average trading volume, 52-week low, 52-week high, beta, and correlation between asset classes or securities.
  • Statistics in economics include gross domestic product (GDP), unemployment, consumer pricing, inflation, and other economic growth metrics.
  • Statistics in marketing include conversion rates, click-through rates, search quantities, and social media metrics.
  • Statistics in accounting include liquidity, solvency, and profitability metrics across time.
  • Statistics in information technology include bandwidth, network capabilities, and hardware logistics.
  • Statistics in human resources include employee turnover, employee satisfaction, and average compensation relative to the market.

Why Is Statistics Important?

Statistics provide the information needed to understand how things work. They’re used to conduct research, evaluate outcomes, develop critical thinking, and make informed decisions. Statistics can be used to inquire about almost any field of study to investigate why things happen, when they occur, and whether reoccurrence is predictable.

What’s the Difference Between Descriptive and Inferential Statistics?

Descriptive statistics are used to describe or summarize the characteristics of a sample or data set, such as a variable’s mean, standard deviation, or frequency. Inferential statistics employ any number of techniques to relate variables in a data set to one another. An example would be using correlation or regression analysis. These can then be used to estimate forecasts or infer causality.

Who Uses Statistics?

Statistics are used widely across an array of applications and professions. Statistics come into play whenever data are collected and analyzed, from government agencies to academic research to investment analysis.

How Are Statistics Used in Economics and Finance?

Economists collect and look at all sorts of data ranging from consumer spending to housing starts to inflation to GDP growth. In finance, analysts and investors collect data about companies, industries, sentiment, and market data on price and volume. The use of inferential statistics in these fields is known as econometrics. Several important financial models, from the capital asset pricing model (CAPM) to modern portfolio theory (MPT) and the Black-Scholes options pricing model, rely on statistical inference.

The Bottom Line

Statistics is the practice of analyzing pieces of information that might seem conflicting or unrelated at first glance. It can lead to a solid career as a statistician, but it can also be a handy tool in everyday life, perhaps when you’re analyzing the odds that your favorite team will win the Super Bowl before you place a bet, gauging the viability of an investment, or determining whether you’re being comparatively overcharged for a product or service.

Article Sources
Investopedia requires writers to use primary sources to support their work. These include white papers, government data, original reporting, and interviews with industry experts. We also reference original research from other reputable publishers where appropriate. You can learn more about the standards we follow in producing accurate, unbiased content in our editorial policy.
  1. Encyclopædia Britannica. “Probability and Statistics.”

  2. Coursera. “How Much Do Statisticians Make? Your 2024 Salary Guide.”

  3. Olympics. “Tokyo 2020: Athletics Men’s 100m Results.”

  4. U.S. Bureau of Labor Statistics. “Consumer Price Index.”

  5. Arizona State University, World Meteorological Organization’s World Weather & Climate Extremes Archive. “World: Lowest Temperature.”

  6. Baseball Reference. “MLB Most Valuable Player MVP Award Winners.”
