The capacity of a lithium-ion battery is calculated by multiplying the discharge current (in amperes) by the time (in hours) over which that current flows, giving a value in ampere-hours (Ah) or milliampere-hours (mAh). For battery materials researchers, understanding how to calculate battery capacity accurately—and why measured values differ from theoretical ones—is fundamental to interpreting experimental results and comparing electrode materials.
This article addresses each core question in sequence, from the basic definition of lithium-ion battery capacity through to the practical considerations that determine how reliably you can measure it in a laboratory setting.
What is the capacity of a lithium-ion battery?
The capacity of a lithium-ion battery is the total charge it can store and deliver, expressed in mAh (milliampere-hours) for cells or in mAh/g (specific capacity) when normalising to the mass of the active material. It represents the quantity of lithium ions that can be reversibly intercalated or stored within the electrode materials during a full charge or discharge cycle.
In research contexts, capacity is almost always reported as specific capacity in mAh/g, which allows meaningful comparison between different electrode materials regardless of the absolute mass of active material used in a given cell. When evaluating full cells or commercial formats, gravimetric (Wh/kg) and volumetric (Wh/L) energy densities are more relevant metrics, but these are derived quantities that depend on capacity as their foundation.
How do you calculate the theoretical capacity of a lithium-ion battery?
The theoretical capacity of a lithium-ion battery material is calculated using the formula Q = (n × F) / M, where n is the number of electrons transferred per formula unit, F is Faraday’s constant (96,485 C/mol), and M is the molar mass of the active material in g/mol. The result in C/g is then converted to mAh/g by dividing by 3.6.
Applying the battery capacity formula to common materials
For graphite, one lithium ion is stored per six carbon atoms (LiC6). By convention, the capacity is normalised to the mass of the carbon host (C6, approximately 72 g/mol); with n = 1, the theoretical specific capacity works out to approximately 372 mAh/g. For lithium iron phosphate (LiFePO4), normalised to the lithiated formula unit with a molar mass of approximately 158 g/mol and n = 1, the theoretical capacity is approximately 170 mAh/g.
These theoretical values assume complete, reversible utilisation of every available lithium site within the crystal structure—an idealised condition that is rarely achieved in practice. The formula nonetheless provides an essential benchmark against which experimental results can be evaluated.
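The calculation above is straightforward to script. The sketch below implements Q = (n × F) / M with the conversion to mAh/g, using the molar-mass conventions discussed above (C6 host for graphite, lithiated formula unit for LiFePO4); the function name is illustrative.

```python
# Theoretical specific capacity: Q = n * F / M, converted from C/g to mAh/g
# by dividing by 3.6 (since 1 mAh = 3.6 C).

F = 96485.0  # Faraday's constant, C/mol

def theoretical_capacity_mah_g(n_electrons: float, molar_mass_g_mol: float) -> float:
    """Theoretical specific capacity in mAh/g."""
    return n_electrons * F / (3.6 * molar_mass_g_mol)

# Graphite, normalised to the C6 host (approx. 72.06 g/mol)
print(f"graphite: {theoretical_capacity_mah_g(1, 72.06):.0f} mAh/g")   # ~372
# LiFePO4, normalised to the lithiated formula unit (approx. 157.76 g/mol)
print(f"LiFePO4:  {theoretical_capacity_mah_g(1, 157.76):.0f} mAh/g")  # ~170
```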
What is the difference between capacity in mAh and energy in Wh?
Capacity in mAh measures the total charge a cell can deliver, while energy in Wh (watt-hours) measures the actual work that charge can perform. The two are related by voltage: Energy (Wh) = Capacity (Ah) × Average Voltage (V). A cell with 1,000 mAh capacity at an average discharge voltage of 3.6 V delivers 3.6 Wh of energy.
This distinction matters in battery research because two electrode materials may have similar specific capacities in mAh/g but very different energy densities if their operating voltages differ significantly. Reporting capacity alone is insufficient when comparing materials for energy storage applications—the full discharge voltage profile must be considered to calculate the specific energy in Wh/kg.
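A short numerical sketch makes the distinction concrete. The cell-level example reproduces the 1,000 mAh / 3.6 V case above; the material-level comparison uses hypothetical average voltages to show how two materials with the same specific capacity can differ substantially in specific energy.

```python
# Energy (Wh) = Capacity (Ah) * average discharge voltage (V).

def specific_energy_wh_kg(capacity_mah_g: float, avg_voltage_v: float) -> float:
    """Specific energy of the active material: mAh/g * V = mWh/g = Wh/kg."""
    return capacity_mah_g * avg_voltage_v

# Cell-level example from the text: 1,000 mAh at an average 3.6 V
print((1000 / 1000) * 3.6)  # 3.6 Wh

# Material-level comparison at equal specific capacity (voltages hypothetical):
print(specific_energy_wh_kg(170, 3.4))  # 578.0 Wh/kg (active material basis)
print(specific_energy_wh_kg(170, 2.4))  # 408.0 Wh/kg
```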
Why is measured capacity lower than theoretical capacity?
Measured capacity is lower than theoretical capacity because not all lithium sites within the active material are accessible under real electrochemical conditions. Several mechanisms contribute to this discrepancy, including incomplete lithium utilisation, irreversible side reactions, and structural or kinetic limitations within the electrode.
Key factors that reduce accessible capacity
- Solid Electrolyte Interphase (SEI) formation: During the first cycles, the SEI layer forms on the anode surface, consuming lithium irreversibly and reducing the charge available for subsequent cycling. This is the primary source of first-cycle capacity loss.
- Particle disconnection: Volume changes during lithiation and delithiation can cause active material particles to lose electrical contact with the current collector or conductive additive.
- Electrolyte decomposition: Continued side reactions at electrode surfaces consume lithium and generate resistive surface films over repeated cycles.
- Structural degradation: Phase transitions and lattice strain in cathode materials can block lithium diffusion pathways over time.
Coulombic efficiency—the ratio of charge extracted during discharge to charge inserted during charge—is the standard metric for tracking these losses cycle by cycle. High, stable coulombic efficiency indicates that irreversible processes are minimal, which is a key criterion for evaluating new electrode materials.
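The ratio described above is a one-line calculation; the sketch below applies it to two illustrative cycles (the capacity values are hypothetical, chosen to mimic a large first-cycle SEI loss followed by stable cycling).

```python
# Coulombic efficiency: charge extracted on discharge divided by charge
# inserted on charge, tracked cycle by cycle.

def coulombic_efficiency(discharge_mah: float, charge_mah: float) -> float:
    """Coulombic efficiency as a percentage."""
    return 100.0 * discharge_mah / charge_mah

# First cycle: large irreversible loss, dominated by SEI formation
print(f"{coulombic_efficiency(discharge_mah=340.0, charge_mah=400.0):.1f} %")   # 85.0 %

# A later cycle: irreversible losses are small once the SEI has stabilised
print(f"{coulombic_efficiency(discharge_mah=338.5, charge_mah=339.0):.2f} %")  # 99.85 %
```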
How does C-rate affect the measurable capacity of a cell?
C-rate (the charge or discharge current expressed as a multiple of the cell’s nominal capacity) has a direct effect on measurable capacity. At higher C-rates, the measured discharge capacity decreases because the solid-state diffusion of lithium ions within electrode particles cannot keep pace with the demand for current, resulting in higher overpotential and earlier voltage cut-off.
At a low C-rate such as C/20, the system is close to thermodynamic equilibrium and the measured capacity approaches the practical maximum for that material. At C/2 or 1C, diffusion limitations become significant and a measurable reduction in delivered capacity is observed. This rate-capability behaviour is itself an important characterisation parameter—plotting capacity against C-rate reveals information about the kinetic limitations of the electrode material and the electrolyte.
Overpotential, the difference between the thermodynamic electrode potential and the actual potential under current, increases with C-rate and is the direct electrochemical cause of this capacity reduction. Researchers studying rate capability must therefore specify the C-rate at which any reported capacity value was measured, as omitting this information makes comparisons between studies unreliable.
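In practice, the C-rate is translated into an applied galvanostatic current from the cell's nominal capacity. The sketch below does this for a research electrode, where the nominal capacity is itself derived from the active-material mass; the mass and practical capacity values are illustrative assumptions for a graphite electrode.

```python
# Applied current for a given C-rate: I = C_rate * nominal capacity.

def applied_current_ma(c_rate: float, nominal_capacity_mah: float) -> float:
    """Galvanostatic current (mA) for a given C-rate and nominal capacity (mAh)."""
    return c_rate * nominal_capacity_mah

active_mass_g = 0.010            # 10 mg of active material (assumed)
specific_capacity_mah_g = 350.0  # assumed practical capacity for graphite
nominal_mah = active_mass_g * specific_capacity_mah_g  # 3.5 mAh

for c_rate in (1 / 20, 0.5, 1.0):
    i_ma = applied_current_ma(c_rate, nominal_mah)
    print(f"C-rate {c_rate:g}: {i_ma:.3f} mA, nominal full discharge in {1 / c_rate:g} h")
```

Note that the nominal discharge times (20 h, 2 h, 1 h) are only reached if the full nominal capacity is delivered; at higher C-rates the voltage cut-off is hit earlier, which is precisely the capacity reduction described above.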
How do you measure capacity accurately in a laboratory setting?
Accurate capacity measurement in a laboratory setting requires controlled cell assembly, well-defined electrochemical protocols, and hardware that delivers precise, reproducible current and voltage. The key steps are: preparing electrodes with a known active material mass, assembling cells under controlled conditions (typically in an inert atmosphere), and running galvanostatic cycling at a defined C-rate between specified voltage limits.
Critical factors for reproducible capacity measurements
- Electrode mass accuracy: Specific capacity in mAh/g is only as accurate as the determination of active material mass. Electrode preparation must be consistent, and the mass of binder and conductive additive must be excluded from the calculation.
- Electrolyte volume and distribution: Insufficient electrolyte wetting leads to artificially low capacity. Standardised cell designs with defined electrolyte volumes improve reproducibility.
- Temperature control: Capacity is temperature-dependent. Measurements should be conducted at a defined, stable temperature to allow valid comparisons between experiments.
- Voltage cut-off consistency: The upper and lower voltage limits define the accessible capacity window. These must be held constant across all comparative measurements.
- Formation protocol: The number and rate of formation cycles before capacity measurement affect the stable capacity value, particularly due to SEI development on the anode.
Half-cell testing against a lithium metal reference electrode is common in academic research because it isolates the behaviour of a single electrode material. However, results from half-cells cannot be directly translated to full-cell performance without accounting for differences in the lithium inventory, the counter-electrode contribution, and the absence of a stable reference potential in a full-cell configuration.
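The measurement workflow above reduces to a simple calculation once the galvanostatic data are in hand: specific capacity is current times discharge time, normalised to the active-material mass only. The sketch below includes the binder/additive correction discussed in the checklist; all numerical inputs are illustrative.

```python
# Measured specific capacity from a galvanostatic discharge:
# Q (mAh/g) = I (mA) * t (h) / active-material mass (g).

def specific_capacity_mah_g(current_ma: float, time_h: float,
                            electrode_mass_mg: float,
                            active_fraction: float) -> float:
    """Specific capacity normalised to active-material mass only.

    active_fraction excludes binder and conductive additive from the
    electrode coating mass, as required for a valid mAh/g value.
    """
    active_mass_g = electrode_mass_mg / 1000.0 * active_fraction
    return current_ma * time_h / active_mass_g

# Example: 0.175 mA discharge lasting 18.2 h; 12.5 mg coating, 90 wt% active
q = specific_capacity_mah_g(current_ma=0.175, time_h=18.2,
                            electrode_mass_mg=12.5, active_fraction=0.90)
print(f"{q:.0f} mAh/g")  # 283 mAh/g
```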
How EL-Cell GmbH supports accurate battery capacity measurement
Accurate capacity measurement depends on the quality and consistency of the test-cell hardware used. EL-Cell GmbH designs and manufactures electrochemical test cells and instruments specifically for battery materials research, addressing the reproducibility requirements that underpin reliable capacity data.
- The PAT-Cell provides a standardised, leak-tight cell format with defined electrode geometry and electrolyte volume, reducing cell-to-cell variability in academic and industrial R&D labs.
- The PAT-Tester-i-16 integrates a 16-channel battery tester with a temperature-controlled cell chamber and galvanostatic/potentiostatic capability, including electrochemical impedance spectroscopy (EIS), enabling systematic capacity and rate-capability measurements under controlled conditions.
- The ECD-4-nano electrochemical dilatometer allows simultaneous capacity measurement and electrode thickness monitoring, providing direct insight into the volume changes that contribute to capacity fade.
- EL-Software provides the data acquisition and analysis environment to apply consistent cycling protocols and extract capacity values with full traceability.
If you are setting up a battery materials characterisation workflow or need test-cell hardware designed for reproducible electrochemical measurements, contact EL-Cell GmbH to discuss your experimental requirements.