All Classes
Class |
Description |
AbelianGroup<G> |
An Abelian group is a group with a binary additive operation (+), satisfying the group axioms:
closure
associativity
existence of additive identity
existence of additive opposite
commutativity of addition
|
ABMPredictorCorrector |
The Adams-Bashforth predictor and the Adams-Moulton corrector pair.
|
ABMPredictorCorrector1 |
The Adams-Bashforth predictor and the Adams-Moulton corrector of order 1.
|
ABMPredictorCorrector2 |
The Adams-Bashforth predictor and the Adams-Moulton corrector of order 2.
|
ABMPredictorCorrector3 |
The Adams-Bashforth predictor and the Adams-Moulton corrector of order 3.
|
ABMPredictorCorrector4 |
The Adams-Bashforth predictor and the Adams-Moulton corrector of order 4.
|
ABMPredictorCorrector5 |
The Adams-Bashforth predictor and the Adams-Moulton corrector of order 5.
|
AbsoluteErrorPenalty |
This penalty function sums up the absolute error penalties.
|
AbsoluteTolerance |
The stopping criterion is that the norm of the residual r is equal to or smaller than the specified tolerance, that is,
||r||2 ≤ tolerance
|
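A minimal plain-Java sketch of this criterion (illustrative only; the class and method names here are hypothetical and not this library's API):

public final class AbsoluteToleranceSketch {
    // returns true when ||r||2 <= tolerance
    static boolean withinAbsoluteTolerance(double[] r, double tolerance) {
        double sumSq = 0.0;
        for (double ri : r) {
            sumSq += ri * ri; // accumulate squared residual entries
        }
        return Math.sqrt(sumSq) <= tolerance;
    }
}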
AbstractBivariateEVD |
|
AbstractBivariateProbabilityDistribution |
|
AbstractBivariateRealFunction |
A bivariate real function takes two real arguments and outputs one real value.
|
AbstractHybridMCMC |
Hybrid Monte Carlo, or Hamiltonian Monte Carlo, is a method that combines the traditional
Metropolis algorithm with molecular dynamics simulation.
|
AbstractMetropolis |
The Metropolis algorithm is a Markov Chain Monte Carlo algorithm, which requires only a function
f proportional to the PDF from which we wish to sample.
|
AbstractR1RnFunction |
This is a function that takes one real argument and outputs one vector value.
|
AbstractRealScalarFunction |
|
AbstractRealVectorFunction |
|
AbstractTrivariateRealFunction |
A trivariate real function takes three real arguments and outputs one real value.
|
AbstractUnivariateRealFunction |
A univariate real function takes one real argument and outputs one real value.
|
ACERAnalysis |
The Average Conditional Exceedance Rate (ACER) method estimates the cdf of the distribution of the maxima \(M\) from observations.
|
ACERAnalysis.Result |
|
ACERByCounting |
Estimate epsilons by counting conditional exceedances from the observations.
|
ACERConfidenceInterval |
Using the given (estimated) ACER function as the mean, find the ACER parameters at the lower and
upper bounds of the estimated confidence interval of ACER values.
|
ACERFunction |
The ACER (Average Conditional Exceedance Rate) function \(\epsilon_k(\eta)\) approximates the
probability
\[
\epsilon_k(\eta) = Pr(X_k > \eta | X_1 \le \eta, X_2 \le \eta, ..., X_{k-1} \le \eta)
\]
for a sequence of stochastic process observations \(X_i\) with a k-step memory.
|
ACERFunction.ACERParameter |
|
ACERInverseFunction |
The inverse of the ACER function.
|
ACERLogFunction |
The ACER function in log scale (base e), i.e., \(log(\epsilon_k(\eta))\).
|
ACERReturnLevel |
Given an ACER function, compute the return level \(\eta\) for a given return period \(R\).
|
ACERUtils |
Utility functions used in ACER empirical analysis.
|
ActiveList |
This interface defines the node popping strategy used in a branch-and-bound algorithm, e.g., depth-first-search, best-first-search.
|
ActiveSet |
This class keeps track of the active and inactive indices.
|
AdamsBashforthMoulton |
This class uses an Adams-Bashforth predictor and an Adams-Moulton corrector of the specified
order.
|
AdditiveModel |
The additive model of a time series is an additive composite of the trend, seasonality and irregular random components.
|
ADFAsymptoticDistribution |
This class computes the asymptotic distribution of the Augmented Dickey-Fuller (ADF) test
statistic.
|
ADFAsymptoticDistribution1 |
Deprecated.
|
ADFAsymptoticDistribution1.Type |
the available types of Dickey-Fuller tests
|
ADFDistribution |
This represents an Augmented Dickey Fuller distribution.
|
ADFDistributionTable |
A table contains the simulated observations/values of an empirical ADF distribution for a given set of parameters.
|
ADFDistributionTable_CONSTANT_lag0 |
|
ADFDistributionTable_CONSTANT_TIME_lag0 |
|
ADFDistributionTable_NO_CONSTANT_lag0 |
|
ADFFiniteSampleDistribution |
This class computes the finite sample distribution of the Augmented Dickey-Fuller (ADF) test
statistics.
|
AfterIterations |
Stops after a given number of iterations.
|
AfterNoImprovement |
|
AhatEstimation |
Estimates the coefficient of a VAR(1) model by penalized maximum likelihood.
|
AlternatingDirectionImplicitMethod |
Alternating direction implicit (ADI) method is an implicit method for obtaining numerical
approximations to the solution of a HeatEquation2D .
|
AndersonDarling |
This algorithm calculates the Anderson-Darling k-sample test statistics and p-values.
|
AndersonDarlingPValue |
This algorithm calculates the p-value when the Anderson-Darling statistic and the number of
samples are given.
|
AndStopConditions |
Combines an arbitrary number of stop conditions, terminating when all conditions are met.
|
AnnealingFunction |
An annealing function or a tempered proposal function gives the next proposal/state from the
current state and temperature.
|
AntitheticVariates |
The antithetic variates technique consists of taking, for every sample path obtained, its antithetic path: that is, given a path
\(\varepsilon_1,\dots,\varepsilon_M\), to also take, for example,
\(-\varepsilon_1,\dots,-\varepsilon_M\) or
\(1-\varepsilon_1,\dots,1-\varepsilon_M\).
|
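A minimal sketch of the idea in plain Java (hypothetical example, not this library's API): estimate \(E(e^Z)\) for a standard normal Z by averaging the integrand over each draw and its antithetic counterpart.

import java.util.Random;

public final class AntitheticSketch {
    public static void main(String[] args) {
        Random rng = new Random(1234L);
        int n = 100_000;
        double sum = 0.0;
        for (int i = 0; i < n; ++i) {
            double z = rng.nextGaussian();
            // pair each draw z with its antithetic counterpart -z
            sum += 0.5 * (Math.exp(z) + Math.exp(-z));
        }
        // lower-variance estimate of E(e^Z) = exp(0.5) ~ 1.6487
        System.out.println(sum / n);
    }
}

Pairing each draw with its mirror image cancels much of the odd-order variation of the integrand, which is where the variance reduction comes from.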
AntoniouLu2007 |
This implementation is based on Algorithm 14.5 in the reference.
|
AR1GARCH11Model |
An AR1-GARCH11 model takes this form.
|
Arc<V> |
An arc is an ordered pair of vertices.
|
ArgumentAssertion |
Utility class for checking numerical arguments.
|
ARIMAForecast |
Forecasts an ARIMA time series using the innovative algorithm.
|
ARIMAForecast.Forecast |
The forecast value and variance.
|
ARIMAForecastMultiStep |
Makes forecasts for a time series assuming an ARIMA model using the innovative algorithm.
|
ARIMAModel |
An ARIMA(p, d, q) process, Xt, is such that
\[
(1 - B)^d X_t = Y_t
\]
where
B is the backward or lag operator, d the order of difference,
Yt an ARMA(p, q) process, for which
\[
Y_t = \mu + \sum_{i=1}^p \phi_i Y_{t-i} + \sum_{j=1}^q \theta_j \epsilon_{t-j} + \epsilon_t,
\]
|
ARIMASim |
This class simulates an ARIMA (AutoRegressive Integrated Moving Average) process.
|
ARIMAXModel |
The ARIMAX model (ARIMA model with eXogenous inputs) is a generalization of the ARIMA model by incorporating exogenous variables.
|
ARMAFit |
This interface represents a fitting method for estimating φ, θ, μ, σ2 in an ARMA model.
|
ARMAForecast |
Forecasts an ARMA time series using the innovative algorithm.
|
ARMAForecastMultiStep |
Computes the h-step ahead prediction of a causal ARMA model, by the innovative algorithm.
|
ARMAForecastOneStep |
Computes the one-step ahead prediction of a causal ARMA model, by the innovative algorithm.
|
ARMAGARCHFit |
This implementation fits, for a data set, an ARMA-GARCH model by Quasi-Maximum Likelihood
Estimation.
|
ARMAGARCHModel |
An ARMA-GARCH model takes this form:
\[
X_t = \mu + \sum_{i=1}^p \phi_i X_{t-i} + \sum_{j=1}^q \theta_j \epsilon_{t-j} + \epsilon_t,
\\
\epsilon_t = \sqrt{h_t} \eta_t,
\\
h_t = \alpha_0 + \sum_{i=1}^{r} (\alpha_i e_{t-i}^2) + \sum_{i=1}^{s} (\beta_i h_{t-i})
\]
|
ARMAModel |
A univariate ARMA model, Xt, takes this form.
|
ARMAXModel |
The ARMAX model (ARMA model with eXogenous inputs) is a generalization of the ARMA model by incorporating exogenous variables.
|
ARModel |
This class represents an AR model.
|
ArrayUtils |
|
ARResamplerFactory |
|
AS159 |
Algorithm AS 159 accepts a table shape (the number of rows and columns), and two vectors, the
lists of row and column sums.
|
AS159.RandomMatrix |
a random matrix generated by AS159 and its probability
|
AtThreshold |
Stops when the value reaches a given value with a given precision.
|
AugmentedDickeyFuller |
The Augmented Dickey Fuller test tests whether a one-time differencing (d = 1) will make the time
series stationary.
|
AutoARIMAFit |
Selects the order and estimates the coefficients of an ARIMA model automatically by AIC or AICC.
|
AutoCorrelation |
Compute the Auto-Correlation Function (ACF) for an AutoRegressive Moving Average (ARMA) model, assuming that E[Xt] = 0.
|
AutoCorrelationFunction |
This is the auto-correlation function of a univariate time series {xt}.
|
AutoCovariance |
Computes the Auto-CoVariance Function (ACVF) for an AutoRegressive Moving Average (ARMA) model by
recursion.
|
AutoCovarianceFunction |
This is the auto-covariance function of a univariate time series {xt}.
|
AutoParallelMatrixMathOperation |
This class uses ParallelMatrixMathOperation when the first input matrix argument's size
is greater than the defined threshold; otherwise, it uses SimpleMatrixMathOperation .
|
AverageImplicitModelPCA |
This decomposes the observations into a model of one explicit factor, the
average observation per subject, and implicit factors.
|
BackwardElimination |
Constructs a GLM model for a set of observations using the backward elimination method.
|
BackwardElimination.Step |
|
BackwardSubstitution |
Backward substitution solves a matrix equation in the form Ux = b
by an iterative process for an upper triangular matrix U.
|
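A minimal plain-Java sketch of backward substitution (illustrative only, not this library's API), assuming U is upper triangular with a non-zero diagonal:

public final class BackwardSubstitutionSketch {
    // solves U x = b for upper triangular U
    static double[] solve(double[][] U, double[] b) {
        int n = b.length;
        double[] x = new double[n];
        for (int i = n - 1; i >= 0; --i) {
            double s = b[i];
            for (int j = i + 1; j < n; ++j) {
                s -= U[i][j] * x[j]; // subtract contributions of already-solved unknowns
            }
            x[i] = s / U[i][i];
        }
        return x;
    }
}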
BanachSpace<B,F extends Field<F> & Comparable<F>> |
A Banach space, B, is a complete normed vector space such that
every Cauchy sequence (with respect to the metric d(x, y) = ||x - y||) in B has a limit in B.
|
Bartlett |
Bartlett's test is used to test if k samples are from populations with equal variances, hence homoscedasticity.
|
Basis |
A basis is a set of linearly independent vectors spanning a vector space.
|
BaumWelch |
This implementation trains an HMM model by observations using the Baum–Welch
algorithm.
|
BBNode |
A branch-and-bound algorithm maintains a tree of nodes to keep track of the search paths and the pruned paths.
|
BernoulliTrial |
A Bernoulli trial (or binomial trial) is a random experiment with exactly two possible outcomes,
"success" and "failure", in which the probability of success, p, is the same every time
the experiment is conducted.
|
Best1Bin |
The Best-1-Bin rule is the same as the Rand-1-Bin rule,
except that it always picks the best candidate in the population to be the base.
|
Best2Bin |
The Best-2-Bin rule always picks the best chromosome as the base.
|
Beta |
The beta function defined as:
\[
B(x,y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}= \int_0^1t^{x-1}(1-t)^{y-1}\,dt, x > 0, y > 0
\]
|
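For example, a worked instance of the formula above:
\[
B(2,3) = \frac{\Gamma(2)\Gamma(3)}{\Gamma(5)} = \frac{1! \times 2!}{4!} = \frac{2}{24} = \frac{1}{12}
\]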
BetaDistribution |
The beta distribution is the posterior distribution of the parameter p of a binomial
distribution
after observing α - 1 independent events with probability p and
β - 1 with probability 1 - p,
if the prior distribution of p is uniform.
|
BetaMixtureDistribution |
The HMM states use the Beta distribution to model the observations.
|
BetaMixtureDistribution.Lambda |
the Beta distribution parameters
|
BetaRegularized |
The Regularized Incomplete Beta function is defined as:
\[
I_x(p,q) = \frac{B(x;\,p,q)}{B(p,q)} = \frac{1}{B(p,q)} \int_0^x t^{p-1}\,(1-t)^{q-1}\,dt, p > 0, q > 0
\]
|
BetaRegularizedInverse |
The inverse of the Regularized Incomplete Beta function is defined as:
\[
x = I^{-1}_{(p,q)}(u), 0 \le u \le 1
\]
|
BFGSMinimizer |
The Broyden-Fletcher-Goldfarb-Shanno method is a quasi-Newton method
to solve unconstrained nonlinear optimization problems.
|
BFS<V> |
This class implements the breadth-first-search using iteration.
|
BFS.Node<V> |
This is a node in a BFS-spanning tree.
|
BiconjugateGradientSolver |
The Biconjugate Gradient method (BiCG) is useful for solving non-symmetric
n-by-n linear systems.
|
BiconjugateGradientStabilizedSolver |
The Biconjugate Gradient Stabilized (BiCGSTAB) method is useful for solving
non-symmetric n-by-n linear systems.
|
BicubicInterpolation |
Bicubic interpolation is the two-dimensional equivalent of cubic Hermite
spline interpolation.
|
BicubicInterpolation.PartialDerivatives |
Specify the partial derivatives defined at points on a
BivariateGrid .
|
BicubicSpline |
Bicubic splines are the two-dimensional equivalent of cubic splines.
|
BiDiagonalization |
Given a tall (m x n) matrix A, where m ≥ n,
find orthogonal matrices U and V such that U' * A * V = B.
|
BiDiagonalizationByGolubKahanLanczos |
This implementation uses Golub-Kahan-Lanczos algorithm with reorthogonalization.
|
BiDiagonalizationByHouseholder |
Given a tall (m x n) matrix A, where m ≥ n,
we find orthogonal matrices U and V such that U' * A * V = B.
|
BidiagonalMatrix |
A bi-diagonal matrix is either upper or lower bi-diagonal.
|
BidiagonalMatrix.BidiagonalMatrixType |
the available types of bi-diagonal matrices
|
BidiagonalSVDbyMR3 |
Given a bidiagonal matrix A, computes the singular value decomposition (SVD) of A,
using "Algorithm of Multiple Relatively Robust Representations" (MRRR).
|
BigDecimalUtils |
These are the utility functions to manipulate BigDecimal .
|
BigIntegerUtils |
These are the utility functions to manipulate BigInteger .
|
BilinearInterpolation |
Bilinear interpolation is the 2-dimensional equivalent of linear interpolation.
|
BinomialDistribution |
The binomial distribution is the discrete probability distribution of the number of successes in a sequence of n independent yes/no experiments,
each of which yields success with probability p.
|
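As a standard worked instance (a well-known fact, not taken from this entry): the probability mass function is
\[
P(X = k) = \binom{n}{k} p^k (1-p)^{n-k},
\]
so for n = 10, p = 0.5 and k = 3, \(P(X = 3) = \binom{10}{3} (0.5)^{10} = 120/1024 \approx 0.117\).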
BinomialMixtureDistribution |
The HMM states use the Binomial distribution to model the observations.
|
BinomialMixtureDistribution.Lambda |
the Binomial distribution parameters
|
BinomialRNG |
This random number generator samples from the binomial distribution.
|
Bins<T> |
This class divides the items based on their keys into a number of bins.
|
BisectionRoot |
The bisection method repeatedly bisects an interval and then selects a
subinterval in which a
root must lie for further processing.
|
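A minimal plain-Java sketch of the bisection idea (illustrative only, not this library's API), assuming f(a) and f(b) have opposite signs:

import java.util.function.DoubleUnaryOperator;

public final class BisectionSketch {
    static double bisect(DoubleUnaryOperator f, double a, double b, double tol) {
        double fa = f.applyAsDouble(a);
        while (b - a > tol) {
            double m = 0.5 * (a + b);
            double fm = f.applyAsDouble(m);
            if (fa * fm <= 0) {
                b = m;       // a root lies in [a, m]
            } else {
                a = m;       // a root lies in [m, b]
                fa = fm;
            }
        }
        return 0.5 * (a + b);
    }
}

For example, bisect(x -> x * x - 2, 1, 2, 1e-10) converges to about 1.41421356.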
BivariateArrayGrid |
|
BivariateEVD |
Bivariate Extreme Value (BEV) distribution is the joint distribution of component-wise maxima of
two-dimensional iid random vectors.
|
BivariateEVDAsymmetricLogistic |
The bivariate asymmetric logistic model.
|
BivariateEVDAsymmetricMixed |
The asymmetric mixed model.
|
BivariateEVDAsymmetricNegativeLogistic |
The bivariate asymmetric negative logistic model.
|
BivariateEVDBilogistic |
The bilogistic model.
|
BivariateEVDColesTawn |
The Coles-Tawn model.
|
BivariateEVDHuslerReiss |
The Husler-Reiss model.
|
BivariateEVDLogistic |
The bivariate logistic model.
|
BivariateEVDNegativeBilogistic |
The negative bilogistic model.
|
BivariateEVDNegativeLogistic |
The bivariate negative logistic model.
|
BivariateGrid |
A rectilinear (meaning that grid lines are not necessarily
equally-spaced) bivariate grid of double values.
|
BivariateGridInterpolation |
A bivariate interpolation, which requires the input to form a
rectilinear grid.
|
BivariateProbabilityDistribution |
A bivariate or joint probability distribution for X_1, X_2 is a probability distribution
that gives the probability that each of X_1, X_2, ... falls in any particular range or
discrete set of values specified for that variable.
|
BivariateRealFunction |
A bivariate real function takes two real arguments and outputs one real value.
|
BivariateRegularGrid |
A regular grid is a tessellation of n-dimensional Euclidean space by
congruent parallelotopes (e.g., bricks).
|
BlockSplitPointSearch |
Computes the splitting points with the given threshold.
|
BlockWinogradAlgorithm |
This implementation accelerates matrix multiplication via a combination of the Strassen algorithm
and block matrix multiplication.
|
BMSDE |
A Brownian motion is a stochastic process with the following properties.
|
BoltzAnnealingFunction |
Matlab: @annealingboltz - The step has length square root of temperature, with direction
uniformly at random.
|
BoltzTemperatureFunction |
\(T_k = T_0 / ln(k)\).
|
BootstrapEstimator |
This class estimates the statistic of a sample using a bootstrap method.
|
BorderedHessian |
A bordered Hessian matrix consists of the Hessian of a multivariate function f,
and the gradient of a multivariate function g.
|
BottomUp<V> |
This implementation traverses a directed acyclic graph starting from the leaves at the bottom,
and reaches the roots.
|
BoxConstraints |
This represents the lower and upper bounds for a variable.
|
BoxConstraints.Bound |
A bound constraint for a variable.
|
BoxGeneralizedSimulatedAnnealingMinimizer |
|
BoxGSAAcceptanceProbabilityFunction |
This probability function boxes an unconstrained probability function so that when a proposed
state is outside the box, it has a probability of 0.
|
BoxGSAAnnealingFunction |
|
BoxMinimizer<P extends BoxOptimProblem,S extends MinimizationSolution<?>> |
|
BoxMuller |
The Box-Muller transform (by George Edward Pelham Box and Mervin Edgar Muller 1958)
is a pseudo-random number sampling method for generating pairs of independent standard
normally distributed
(zero expectation, unit variance) random numbers,
given a source of uniformly distributed random numbers.
|
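A minimal plain-Java sketch of the transform itself (illustrative only, not this library's API), mapping two independent U(0,1) draws u1, u2 (with u1 > 0) to two independent N(0,1) values:

public final class BoxMullerSketch {
    static double[] transform(double u1, double u2) {
        double r = Math.sqrt(-2.0 * Math.log(u1)); // radius from the first uniform
        double theta = 2.0 * Math.PI * u2;         // angle from the second uniform
        return new double[]{r * Math.cos(theta), r * Math.sin(theta)};
    }
}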
BoxOptimProblem |
A box constrained optimization problem, for which a solution must be within fixed bounds.
|
BoxPierce |
Deprecated.
|
BracketSearchMinimizer |
This class provides implementation support for those univariate optimization algorithms that are based on bracketing.
|
BranchAndBound |
Branch-and-Bound (BB or B&B) is a general algorithm for finding optimal solutions of various optimization problems,
especially in discrete and combinatorial optimization.
|
BrentCetaMaximizer |
|
BrentMinimizer |
Brent's algorithm is the preferred method for finding the minimum of a
univariate function.
|
BrentRoot |
Brent's root-finding algorithm combines super-linear convergence with
reliability of bisection.
|
BreuschPagan |
The Breusch-Pagan test tests for conditional heteroskedasticity.
|
BrownForsythe |
The Brown-Forsythe test is a statistical test for the equality of group variances based on performing an ANOVA on a transformation of the response variable.
|
BruteForce<D,R> |
A brute force algorithm, or brute-force search or exhaustive search, also
known as generate and test, is a very general problem-solving technique that
consists of systematically enumerating all possible candidates for the
solution and checking whether each candidate satisfies the problem's
statement.
|
BruteForceIPMinimizer |
This implementation solves an integral constrained minimization problem by
brute force search for all possible integer combinations.
|
BruteForceIPProblem |
This implementation is an integral constrained minimization problem that has enumerable integral domains.
|
BruteForceIPProblem.IntegerDomain |
This specifies the integral domain for an integral variable,
i.e., the integer values the variable can take.
|
BruteForceMinimizer<R extends Comparable<R>> |
This implementation solves an unconstrained minimization problem by
brute force search for all given possible values.
|
Bt |
This is a FiltrationFunction that returns \(B(t_i)\),
the Brownian motion value at the i-th time point.
|
BurlischStoerExtrapolation |
The Bulirsch-Stoer extrapolation (or Gragg-Bulirsch-Stoer (GBS)) algorithm combines three powerful
ideas: Richardson extrapolation, the use of rational function extrapolation in Richardson-type
applications, and the modified midpoint method, to obtain numerical solutions to ordinary
differential equations (ODEs) with high accuracy and comparatively little computational effort.
|
BurnInRNG |
A burn-in random number generator discards the first M samples.
|
BurnInRVG |
A burn-in random number generator discards the first M samples.
|
C1 |
A function, f, is said to be of class C1 if its derivative f' exists and is continuous.
|
C2 |
A function, f, is said to be of class C2 if its first and second derivatives, f' and f'', exist and are continuous.
|
C2OptimProblem |
This is an optimization problem of a real valued function that is twice differentiable.
|
C2OptimProblemImpl |
This is an optimization problem of a real valued function: \(\max_x f(x)\).
|
CartesianProduct<T> |
The Cartesian product can be generalized to the n-ary Cartesian product over
n sets X1, ..., Xn.
|
CaseResamplingReplacement |
This is the classical bootstrap method described in the reference.
|
CaseResamplingReplacementForObject<X> |
This is the classical bootstrap method described in the reference.
|
CauchyPolynomial |
The Cauchy polynomial of a polynomial takes this form.
|
CentralPath |
A central path is a solution to both the primal and dual problems of a semi-definite programming problem.
|
Ceta |
The function C(η) to be maximized (Eq.
|
Ceta.PortfolioMoments |
|
Ceta.PortfolioMomentsEstimator |
|
CetaMaximizer |
Defines an algorithm to search for the maximal C(η).
|
CetaMaximizer.NegCetaFunction |
|
CetaMaximizer.Solution |
|
ChangeOfVariable |
Change of variable can ease the computation of some integrals, such as improper integrals.
|
CharacteristicPolynomial |
The characteristic polynomial of a square matrix is the function
|
ChebyshevRule |
|
Cheng1978 |
Cheng (1978) is a rejection method for generating beta variates.
|
ChiSquareDistribution |
The Chi-square distribution is the distribution of
the sum of the squares of a set of statistically independent standard Gaussian random variables.
|
ChiSquareIndependenceTest |
Pearson's chi-square test of independence assesses whether paired observations on two variables,
expressed in a contingency table, are independent of each other.
|
ChiSquareIndependenceTest.Type |
the available distributions used for the test
|
Chol |
Cholesky decomposition decomposes a real, symmetric (hence square), and
positive definite matrix A into A = L * Lt, where
L is a lower triangular matrix.
|
Cholesky |
Cholesky decomposition decomposes a real, symmetric (hence square), and positive definite matrix
A into A = L * Lt, where L is a lower triangular matrix.
|
CholeskyBanachiewicz |
Cholesky decomposition decomposes a real, symmetric (hence square), and positive definite matrix
A into A = L * Lt, where L is a lower triangular matrix.
|
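A minimal plain-Java sketch of the Cholesky-Banachiewicz recurrence (illustrative only, not this library's API), which fills L row by row for a symmetric positive definite A:

public final class CholeskySketch {
    // returns lower triangular L with A = L * Lt
    static double[][] decompose(double[][] A) {
        int n = A.length;
        double[][] L = new double[n][n];
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j <= i; ++j) {
                double s = A[i][j];
                for (int k = 0; k < j; ++k) {
                    s -= L[i][k] * L[j][k]; // subtract already-computed products
                }
                L[i][j] = (i == j) ? Math.sqrt(s) : s / L[j][j];
            }
        }
        return L;
    }
}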
CholeskyBanachiewiczParallelized |
|
CholeskySparse |
Cholesky decomposition decomposes a real, symmetric (hence square), and positive definite matrix
A into A = L * Lt, where L is a lower triangular matrix.
|
CholeskyWang2006 |
Cholesky decomposition works only for a positive definite matrix.
|
Chromosome |
A chromosome is a representation of a solution to an optimization problem.
|
ClusterAnalyzer |
This class counts clusters of exceedances based on observations above a given threshold; discontinuities in the exceedances are tolerated up to an interval length r.
|
Clusters |
Store cluster information obtained by cluster analysis.
|
Clusters.Cluster |
Define the beginning and ending indices (inclusively) of a cluster.
|
CointegrationMLE |
Two or more time series are cointegrated if they each share a common type of
stochastic drift, that is, to a limited degree they share a certain type of
behavior in terms of their long-term fluctuations, but they do not
necessarily move together and may be otherwise unrelated.
|
ColumnBindMatrix |
A fast "cbind" matrix from vectors.
|
CombinedCetaMaximizer |
Searches for the maximum C(η) using an array of given maximizers, tried in sequence.
|
CombinedVectorByRef |
For efficiency, this wrapper concatenates two or more vectors by references
(without data copying).
|
CommonRandomNumbers |
Common random numbers is a variance reduction technique to apply when we
are comparing two random systems, e.g., \(E(f(X_1) - g(X_2))\).
|
Complex |
A complex number is a number consisting of a real number part and an imaginary number part.
|
ComplexMatrix |
|
CompositeDoubleArrayOperation |
It is desirable to have multiple implementations and switch between them for, e.g., performance
reasons.
|
CompositeDoubleArrayOperation.ImplementationChooser |
Specify which implementation to use.
|
CompositeLinearCongruentialGenerator |
A composite generator combines a number of simple
LinearCongruentialGenerator , such as Lehmer , to form one
longer period generator by first summing values and then taking modulus.
|
ConcurrentCachedGenerator<T> |
A generic wrapper that makes an underlying item generator thread-safe by
caching generated items in a concurrently-accessible list.
|
ConcurrentCachedGenerator.Generator<T> |
Defines a generic generator of type T .
|
ConcurrentCachedRLG |
This is a fast thread-safe wrapper for random long generators.
|
ConcurrentCachedRNG |
This is a fast thread-safe wrapper for random number generators.
|
ConcurrentCachedRVG |
This is a fast thread-safe wrapper for random vector generators.
|
ConcurrentStandardNormalRNG |
|
ConditionalSumOfSquares |
The method Conditional Sum of Squares (CSS) fits an ARIMA model by minimizing
the conditional sum of squares.
|
ConfidenceInterval |
This class stores information for a list of confidence intervals, with the same confidence level.
|
CongruentMatrix |
Given a matrix A and an invertible matrix P, we create the congruent matrix
B s.t.,
B = P'AP
|
ConjugateGradientMinimizer |
A conjugate direction optimization method is performed by using sequential
line search along directions that bear a strict mathematical relationship to
one another.
|
ConjugateGradientNormalErrorSolver |
For an under-determined system of linear equations, Ax = b, or
when the coefficient matrix A is non-symmetric and nonsingular,
the normal equation matrix AAt is symmetric and
positive definite, and hence CG is applicable.
|
ConjugateGradientNormalResidualSolver |
For an under-determined system of linear equations, Ax = b, or
when the coefficient matrix A is non-symmetric and nonsingular,
the normal equation matrix AAt is symmetric and
positive definite, and hence CG is applicable.
|
ConjugateGradientSolver |
The Conjugate Gradient method (CG) is useful for solving a symmetric n-by-n
linear system.
|
ConjugateGradientSquaredSolver |
The Conjugate Gradient Squared method (CGS) is useful for solving
a non-symmetric n-by-n linear system.
|
ConstantDriftVector |
The class represents a constant drift function.
|
Constants |
This class lists the global parameters and constants in this nmdev library.
|
ConstantSeeder<T extends Seedable> |
A wrapper that seeds each given seedable random number generator with the given seed(s).
|
ConstantSigma1 |
The class represents a constant diffusion coefficient function.
|
ConstantSigma2 |
Deprecated.
|
ConstrainedCellFactory |
This defines a Differential Evolution operator that takes in account constraints.
|
ConstrainedLASSObyLARS |
This class solves the constrained form of LASSO by modified least angle regression (LARS) and
linear interpolation:
\[
\min_w \left \{ \left \| Xw - y \right \|_2^2 \right \} \text{ subject to } \left \| w \right \|_1 \leq t
\]
|
ConstrainedLASSObyQP |
This class solves the constrained form of LASSO (i.e.\(\min_w \left \{ \left \| Xw - y \right
\|_2^2 \right \}\)
subject to \( \left \| w \right \|_1 \leq t \)) by transforming it into a single quadratic
programming problem with (2 * m + 1) constraints, where m is the number of
columns of the design matrix.
|
ConstrainedLASSOProblem |
A LASSO (least absolute shrinkage and selection operator) problem focuses on solving an RSS
(residual sum of squared errors) problem with L1 regularization.
|
ConstrainedMinimizer<P extends ConstrainedOptimProblem,S extends MinimizationSolution<?>> |
|
ConstrainedOptimProblem |
A constrained optimization problem takes this form.
|
ConstrainedOptimProblemImpl1 |
This implements a constrained optimization problem for a function f
subject to equality and less-than-or-equal-to constraints.
|
ConstrainedOptimSubProblem |
A constrained optimization sub-problem takes this form.
|
Constraints |
A set of constraints for a (real-valued) optimization problem is a set of functions.
|
ConstraintsUtils |
These are the utility functions for manipulating Constraints.
|
ContextRNG<T> |
This uniform number generator generates independent sequences of random numbers per context.
|
ContinuedFraction |
A continued fraction representation of a number has this form:
\[
z = b_0 + \cfrac{a_1}{b_1 + \cfrac{a_2}{b_2 + \cfrac{a_3}{b_3 + \cfrac{a_4}{b_4 + \ddots\,}}}}
\]
\(a_i\) and \(b_i\) can be functions of x, which in turn makes z a function of x.
|
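A standard worked example of this form (a well-known expansion, not taken from this entry) is
\[
\sqrt{2} = 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \ddots\,}}}
\]
i.e., \(b_0 = 1\), \(a_i = 1\) and \(b_i = 2\) for \(i \ge 1\).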
ContinuedFraction.MaxIterationsExceededException |
RuntimeException thrown when the continued fraction fails to converge for a given epsilon before a certain number of iterations.
|
ContinuedFraction.Partials |
This interface defines a continued fraction in terms of
the partial numerators \(a_n\) and the partial denominators \(b_n\).
|
ControlVariates |
Control variates method is a variance reduction technique that exploits
information about the errors in estimates of known quantities to reduce the
error of an estimate of an unknown quantity.
|
ControlVariates.Estimator |
|
ConvectionDiffusionEquation1D |
The convection–diffusion equation is a combination of the diffusion and
convection (advection) equations, and describes physical phenomena where
particles, energy, or other physical quantities are transferred inside a
physical system due to two processes: diffusion and convection.
|
ConvergenceFailure |
|
ConvergenceFailure.Reason |
the reasons for the convergence failure
|
CorrelationCheck |
|
CorrelationMatrix |
The correlation matrix of n random variables X1, ..., Xn is the n × n matrix whose (i, j) entry is corr(Xi, Xj), the correlation between Xi and Xj.
|
Corvalan2005 |
This paper tackles the corner solution problem of many portfolio optimizers, by
optimizing the portfolio diversification with some relaxation on the volatility σ
and the expected return R of a given optimized (but non-diversified) portfolio.
|
Corvalan2005.WeightsConstraint |
Constraints on weights which are defined by a set of less-than constraints.
|
Counter |
A counter keeps track of the number of occurrences of numbers.
|
CountMonitor<S> |
This IterationMonitor counts the number of iterates generated, hence
the number of iterations.
|
CourantPenalty |
This penalty function sums up the squared error penalties.
|
Covariance |
Covariance is a measure of how much two variables change together.
|
CovarianceEstimation |
Estimates the covariance matrix by maximum likelihood.
|
CovarianceSelectionGLASSOFAST |
GLASSOFAST is the Graphical LASSO algorithm to solve the covariance selection
problem.
|
CovarianceSelectionLASSO |
The LASSO approach of covariance selection.
|
CovarianceSelectionProblem |
This class defines the covariance selection problem outlined in d'Aspremont
(2008).
|
CovarianceSelectionSolver |
|
CramerVonMises2Samples |
This algorithm calculates the two sample Cramer-Von Mises test statistic and p-value.
|
CrankNicolsonConvectionDiffusionEquation1D |
This class uses the Crank-Nicolson scheme to obtain a numerical solution of a
one-dimensional convection-diffusion PDE.
|
CrankNicolsonConvectionDiffusionEquation1D.Coefficients |
Gets the coefficients of a discretized 1D convection-diffusion equation
for each time step.
|
CrankNicolsonHeatEquation1D |
The Crank-Nicolson method is an algorithm for obtaining a numerical solution
to parabolic PDE problems.
|
CrankNicolsonHeatEquation1D.Coefficients |
Gets the coefficients of a discretized 1D heat equation for each time
step.
|
CSDPMinimizer |
Implements the CSDP algorithm for semidefinite programming problem with equality constraints.
|
CSRSparseMatrix |
The Compressed Sparse Row (CSR) format for sparse matrix has this
representation:
(value, col_ind, row_ptr) .
|
CubicHermite |
Cubic Hermite spline interpolation is a piecewise spline interpolation, in
which each polynomial is in Hermite form which consists of two control points
and two control tangents.
|
CubicHermite.Tangent |
The method for computing the control tangent at a given index.
|
CubicHermite.Tangents |
|
CubicRoot |
This is a cubic equation solver.
|
CubicSpline |
The cubic spline interpolation fits a cubic polynomial between each pair of
adjacent points such that adjacent cubics are continuous in their first and
second derivatives.
|
CumulativeNormalHastings |
The Hastings algorithm is a faster but less accurate way to compute the cumulative standard Normal.
|
CumulativeNormalInverse |
The inverse of the cumulative standard Normal distribution function is defined as:
\[
N^{-1}(u)
\]
|
CumulativeNormalMarsaglia |
Marsaglia's algorithm is about 3 times slower but more accurate for computing the cumulative standard Normal.
|
CurveFitting |
Curve fitting is the process of constructing a curve, or mathematical function, that has the best
fit to a series of data points, possibly subject to constraints.
|
DAgostino |
D'Agostino's K² test is a goodness-of-fit measure of departure from normality.
|
DAGraph<V,E extends Arc<V>> |
A directed acyclic graph (DAG) is a directed graph with no directed cycles.
|
Dai2011HMM |
Creates a two-state Geometric Brownian Motion with a constant volatility.
|
Dai2011HMM.CalibrationParam |
|
Dai2011HMM.ModelParam |
|
Dai2011Solver |
Solves the stochastic control problem in the referenced paper to get the two
thresholds.
|
Dai2011Solver.Boundaries |
|
Dai2011Solver.Builder |
|
DateTimeGenericTimeSeries<V> |
This is a generic time series where time is indexed by LocalDateTime
and value can be any type.
|
DateTimeGenericTimeSeries.Entry<V> |
This is the TimeSeries.Entry for a DateTime -indexed time
series.
|
DateTimeTimeSeries |
This is a time series that has its double values indexed by LocalDateTime.
|
DBeta |
This is the first order derivative function of the Beta function w.r.t x, \({\partial \over \partial x} \mathrm{B}(x, y)\).
|
DBetaRegularized |
This is the first order derivative function of the Regularized Incomplete Beta function, BetaRegularized , w.r.t the upper limit, x.
|
DeepCopyable |
This interface provides a way to do polymorphic copying.
|
DefaultDeflationCriterion |
The default deflation criterion is to use eq.
|
DefaultDeflationCriterion.MatrixNorm |
Computes the norm of a given matrix.
|
DefaultMatrixStorage |
There are multiple ways to implement the storage data structure depending on the matrix type for
optimization purpose.
|
DefaultSimplex |
A simplex optimization algorithm, e.g., Nelder-Mead, requires an initial simplex to start the search.
|
Deflation |
A deflation found in a Hessenberg (or tridiagonal in symmetric case) matrix.
|
DeflationCriterion |
Determines whether a sub-diagonal entry is sufficiently small to be neglected.
|
DenseData |
This implementation of the storage of a dense matrix stores the data of a 2D matrix as an 1D
array.
|
DenseMatrix |
This class implements the standard, dense, double based matrix
representation.
|
DenseMatrixMultiplication |
Matrix operation that multiplies two matrices.
|
DenseMatrixMultiplicationByBlock |
|
DenseMatrixMultiplicationByBlock.BlockAlgorithm |
|
DenseMatrixMultiplicationByIjk |
Implements the naive IJK algorithm.
|
DenseVector |
This class implements the standard, dense, double based vector
representation.
|
Densifiable |
This interface specifies whether a matrix implementation can be efficiently converted to the standard dense matrix representation.
|
DEOptim |
Differential Evolution (DE) is a global optimization method.
|
DEOptim.NewCellFactory |
This factory constructs a new DEOptimCellFactory for each minimization problem.
|
DEOptimCellFactory |
|
DErf |
This is the first order derivative function of the Error function, Erf .
|
DerivativeFunction |
Defines the derivative function F(x, y) for ODE problems.
|
Dfdx |
The first derivative is a measure of how a function changes as its input changes.
|
Dfdx.Method |
the available methods to compute the numerical derivative
|
DFPMinimizer |
The Davidon-Fletcher-Powell method is a quasi-Newton method to solve
unconstrained nonlinear optimization problems.
|
DFS<V> |
This class implements the depth-first-search using iteration.
|
DFS.Node<V> |
This is a node in a DFS-spanning tree.
|
DFS.Node.Color |
This is the coloring scheme of visits.
|
DGamma |
This is the first order derivative function of the Gamma function, \({d \mathrm{\Gamma}(x) \over dx}\).
|
DGaussian |
This is the first order derivative function of a Gaussian function, \({d \mathrm{\phi}(x) \over dx}\).
|
DiagonalMatrix |
A diagonal matrix has non-zero entries only on the main diagonal.
|
DiagonalSum |
Add diagonal elements to a matrix, an efficient implementation.
|
DifferencedIntTimeTimeSeries |
Differencing of a time series xt in discrete time t is the
transformation of the series to a new time series (1-B)xt where the new values
are the differences between consecutive values of xt.
|
Diffusion |
This class represents the diffusion term, σ, of a univariate SDE.
|
DiffusionMatrix |
The diffusion term, σ, of an SDE takes this form: σ(dt, Xt, Zt, ...).
|
DiffusionSigma |
This class implements the diffusion term in the form of a diffusion matrix.
|
Digamma |
The digamma function is defined as the logarithmic derivative of the gamma function.
|
DiGraph<V,E extends Arc<V>> |
A directed graph or digraph is a graph, or set of nodes connected by edges, where the edges have
a direction associated with them.
|
Dijkstra<V> |
Dijkstra's algorithm is a graph search algorithm that solves the single-source shortest path
problem for a graph with non-negative edge path costs, producing a shortest path tree.
|
DimensionCheck |
These are the utility functions for checking table dimension.
|
DirichletDistribution |
The Dirichlet distribution (after Peter Gustav Lejeune Dirichlet), often denoted Dir(a),
is a family of continuous multivariate probability distributions parametrized by a vector
a of positive reals.
|
DiscreteHMM |
This is the discrete hidden Markov model as defined by Rabiner.
|
DiscreteSDE |
This interface represents the discrete approximation of a univariate SDE.
|
DiversificationMeasure |
Defines the diversification of a portfolio.
|
DividedDifferences |
Divided differences is a recursive division process for calculating the coefficients of the interpolation polynomial in Newton form.
|
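For reference, the standard recursion (a well-known definition, not taken from this entry) is
\[
f[x_i] = f(x_i), \quad f[x_i, \dots, x_{i+k}] = \frac{f[x_{i+1}, \dots, x_{i+k}] - f[x_i, \dots, x_{i+k-1}]}{x_{i+k} - x_i},
\]
and the Newton-form interpolation polynomial is \(p(x) = f[x_0] + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1) + \dots\)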
DLM |
This is the multivariate controlled DLM (controlled Dynamic Linear Model) specification.
|
DLMSeries |
This is a simulator for a multivariate controlled dynamic linear model process.
|
DLMSeries.Entry |
This is the TimeSeries.Entry for a univariate DLM time series.
|
DLMSim |
This is a simulator for a univariate controlled dynamic linear model process.
|
DLMSim.Innovation |
a simulated innovation
|
DOKSparseMatrix |
The Dictionary Of Key (DOK) format for sparse matrix uses the coordinates of
non-zero entries in the matrix as keys.
|
Doolittle |
Doolittle algorithm is an LU decomposition algorithm which decomposes a square matrix
A into:
P is an n x n permutation matrix;
L is an n x n (unit) lower triangular matrix;
U is an n x n upper triangular matrix,
such that PA = LU.
|
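A small worked instance (with P being the identity, chosen so no row exchange is needed):
\[
\begin{bmatrix} 4 & 3 \\ 6 & 3 \end{bmatrix}
=
\begin{bmatrix} 1 & 0 \\ 1.5 & 1 \end{bmatrix}
\begin{bmatrix} 4 & 3 \\ 0 & -1.5 \end{bmatrix}
\]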
DoubleArrayMath |
These are the math functions that operate on double[] .
|
DoubleArrayOperation |
It is possible to provide different implementations for different platforms, hardware, etc.
|
DoubleBruteForceMinimizer |
This implementation solves an unconstrained minimization problem by
brute force search for all given possible values.
|
DoubleExponential |
This transformation speeds up the convergence of the Trapezoidal Rule exponentially.
|
DoubleExponential4HalfRealLine |
This transformation is good for the region \((0, +\infty)\).
|
DoubleExponential4RealLine |
This transformation is good for the region \((-\infty, +\infty)\).
|
DoubleUtils |
These are the utility functions to manipulate double and int .
|
DoubleUtils.ifelse |
Return a value with the same shape as test which is filled with
elements selected from either yes or no depending on
whether the element of test is true or false .
|
DoubleUtils.RoundingScheme |
the available schemes to round a number
|
DoubleUtils.which |
Decide whether x satisfies the boolean test.
|
DPolynomial |
This is the first order derivative function of a Polynomial , which, again, is a polynomial.
|
DQDS |
Computes all the eigenvalues of the symmetric positive definite tridiagonal matrix associated
with the qd-array Z to high relative accuracy.
|
Drift |
This class represents the drift term, μ, of a univariate SDE.
|
DriftVector |
The drift term, μ, of an SDE takes this form: μ(dt, Xt, Zt, ...).
|
DuplicatedAbscissae |
This exception is thrown when a function has two identical x-abscissae and is hence ill-defined.
|
DynamicCreator |
Performs the Dynamic Creation algorithm (DC) to generate parameters for MersenneTwister .
|
DynamicCreatorException |
Indicates that a problem has occurred in the dynamic creation process, usually because suitable
parameters were not found.
|
Edge<V> |
An edge connects a pair of vertices.
|
EdgeBetweeness<V> |
The edge betweenness centrality is defined as the number of the shortest paths that go through an
edge in a graph or network.
|
Eigen |
Given a square matrix A, an eigenvalue λ and its associated
eigenvector v are defined by Av = λv.
|
Eigen.Method |
the available methods to compute eigenvalues and eigenvectors
|
EigenBoundUtils |
Utility methods for computing bounds of eigenvalues.
|
EigenCount |
Counts the number of eigenvalues in a symmetric tridiagonal matrix T that are less than a
given value x.
|
EigenCountInRange |
Finds the number of eigenvalues of the symmetric tridiagonal matrix T that are in a given
interval.
|
EigenDecomposition |
Let A be a square, diagonalizable N × N matrix with N
linearly independent eigenvectors.
|
EigenProperty |
EigenProperty is a read-only structure that contains the information
about a particular eigenvalue,
such as its multiplicity and eigenvectors.
|
EigenvalueByDQDS |
Computes all the eigenvalues of a symmetric tridiagonal matrix.
|
ElementaryFunction |
This class contains some elementary functions for complex number, Complex .
|
ElementaryOperation |
There are three elementary row operations which are equivalent to left multiplying an elementary
matrix.
|
EliminationByAIC |
In each step, a factor is dropped if the resulting model has the least AIC, until no factor
removal can result in a model with AIC lower than the current AIC.
|
EliminationByZValue |
In each step, the factor with the least z-value is dropped, until all z-values are greater than
the critical value (given by the significance level).
|
Elliott2005DLM |
This class implements the Kalman filter model as in Elliott's paper.
|
ElliottOnlineFilter |
It is important to note that this algorithm does not guarantee that A > 0 and 0 < B < 1; therefore, we need to check the outputs.
|
ElliottOnlineFilter.NoModelFitted |
|
EmpiricalACER |
This class contains empirical ACER \(\hat{\epsilon_k}(\eta_i)\) values and other related
statistics estimated from observations.
|
EmpiricalACEREstimation |
This class estimates empirical ACER values from the given observations.
|
EmpiricalACERStatistics |
This class contains the computed statistics of the estimated ACERs.
|
EmpiricalDistribution |
An empirical cumulative probability distribution function
is a cumulative probability distribution function that
assigns probability 1/n at each of the n numbers in a sample.
|
EpsilonStatisticsCalculator |
Compute statistics: mean, confidence interval of estimated ACER values \(\epsilon_k(\eta_i)\).
|
EqualityConstraints |
The domain of an optimization problem may be restricted by equality constraints.
|
Erf |
The Error function is defined as:
\[
\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_{0}^x e^{-t^2} dt
\]
|
Erfc |
This complementary Error function is defined as:
\[
\operatorname{erfc}(x)
= 1-\operatorname{erf}(x)
= \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\,dt
\]
|
ErfInverse |
The inverse of the Error function is defined as:
\[
\operatorname{erf}^{-1}(x)
\]
|
ErgodicHybridMCMC |
A simple decorator which will randomly vary dt between each sample.
|
EstimateByLogLikelihood |
Result from maximum likelihood fitting algorithm, which contains:
the log-likelihood function,
the fitted parameters for the target model,
the variance-covariance matrix,
the standard errors,
the confidence intervals.
|
Estimator |
|
EulerMethod |
The Euler method is a first-order numerical procedure for solving ordinary differential equations
(ODEs) with a given initial value.
|
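A minimal plain-Java sketch of one Euler step for \(dy/dx = F(x, y)\) (illustrative only, not this library's API):

import java.util.function.DoubleBinaryOperator;

public final class EulerSketch {
    // y_{n+1} = y_n + h * F(x_n, y_n)
    static double step(DoubleBinaryOperator F, double x, double y, double h) {
        return y + h * F.applyAsDouble(x, y);
    }
}

Repeating the step from the initial value with a small step size h marches the approximate solution forward one grid point at a time.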
EulerSDE |
The Euler scheme is the first order approximation of an SDE.
|
EvenlySpacedGrid |
This is an evenly spaced time grid.
|
ExceptionUtils |
Exception-related utility functions.
|
ExpectationAtEndTime |
This class computes the expectation (mean) and the variance of a stochastic process,
by Monte Carlo simulation, at the end of an interval: \(E(X_T)\).
|
ExplicitCentralDifference1D |
This explicit central difference method is a numerical technique for solving
the one-dimensional wave equation by the following explicit
three-point central difference equation.
|
ExplicitCentralDifference2D |
This explicit central difference method is a numerical technique for solving
the two-dimensional wave equation by the following explicit
three-point central difference equation.
|
ExplicitImplicitModelPCA |
Given a time series of vectored observations, we decompose them
into a reduced dimension of linear sum of both explicit/specified and
implicit factors.
|
ExplicitImplicitModelPCA.Result |
|
Exponential |
This transformation is good when the lower limit is finite, the upper limit is infinite, and the integrand falls off exponentially.
|
ExponentialDistribution |
The exponential distribution describes the times between events in a Poisson process,
a process in which events occur continuously and independently at a constant average rate.
|
ExponentialFamily |
The exponential family is an important class of probability distributions sharing this particular
form.
|
ExponentialMixtureDistribution |
The HMM states use the Exponential distribution to model the observations.
|
ExpTemperatureFunction |
Exponential decay, where \(T_k = T_0 * 0.95^k\).
|
ExtremalGeneralizedEigenvalueByGreedySearch |
Solves
\[ \min_x \frac{x'Ax}{x'Bx} \\ \textup{s.t.,} \mathbf{Card}(x)
\leqslant k, \left \| x \right \| = 1 \]
|
ExtremalGeneralizedEigenvalueBySDP |
Solves the problem described in Section 3.2, d'Aspremont (2008).
|
ExtremalGeneralizedEigenvalueSolver |
|
ExtremalIndexByClusterSizeReciprocal |
This class estimates the extremal index by the reciprocal of the average cluster size.
|
ExtremalIndexByFerroSeegers |
This class estimates the extremal index from observations by the algorithm proposed by Ferro and
Seegers.
|
ExtremalIndexEstimation |
The extremal index \(\theta \in [0,1]\) characterizes the degree of local dependence in the
extremes of a stationary time series.
|
ExtremeValueMC |
Simulation of first order extreme value Markov chains such that each pair of consecutive values
has the dependence structure of given bivariate extreme value distributions.
|
ExtremeValueMC.MarginalDistributionType |
Types of marginal distribution of each simulated value.
|
F |
The F-test tests whether two normal populations have the same variance.
|
F_Sum_BtDt |
This represents a function of this integral
\[
I = \int_{0}^{1} B(t)dt
\]
|
F_Sum_tBtDt |
This represents a function of this integral
\[
\int_{0}^{1} (t - 0.5) * B(t) dt
\]
|
FactorAnalysis |
Factor analysis is a statistical method used to describe variability among
observed variables in terms of a potentially lower number of unobserved
variables called factors.
|
FactorAnalysis.ScoringRule |
These are the different ways to compute the factor analysis scores.
|
FAEstimator |
These are the estimators (estimated psi, loading matrix, scores, degrees of
freedom, test statistics, p-value, etc.) from the factor analysis MLE
optimization.
|
FastAnnealingFunction |
Matlab default: @annealingfast - The step has length temperature, with direction uniformly at
random.
|
FastKroneckerProduct |
This is a fast and memory-saving implementation of computing the Kronecker product.
|
FastTemperatureFunction |
Fast (reciprocal) decay, where \(T_k = T_0 / k\).
|
FDistribution |
The F distribution is the distribution of the ratio of two independent chi-squared variates.
|
FerrisMangasarianWrightPhase1 |
The phase 1 procedure finds a feasible table from an infeasible one by pivoting the simplex table of a related problem.
|
FerrisMangasarianWrightPhase2 |
This implementation solves a canonical linear programming problem that does not need preprocessing its simplex table.
|
FerrisMangasarianWrightScheme2 |
The scheme 2 procedure removes equalities and free variables.
|
Fibonacci |
A Fibonacci sequence starts with 0 and 1 as the first two numbers.
|
FibonaccMinimizer |
The Fibonacci search is a dichotomous search where a bracketing interval is
sub-divided by the Fibonacci ratio.
|
Field<F> |
As an algebraic structure, every field is a ring, but not every ring is a field.
|
Field.InverseNonExistent |
This is the exception thrown when the inverse of a field element does not exist.
|
Filter |
A filter, for signal processing, takes (real) input signal and transforms it to (real) output signal.
|
Filtration |
This class represents the filtration information known at the end of time.
|
FiltrationFunction |
A filtration function, parameterized by a fixed filtration, is a function of time,
\(f(\mathfrak{F_{t_i}})\).
|
FiniteDifference |
A finite difference (divided by a small increment) is an approximation of the
derivative of a function.
|
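For reference, the usual first-order approximations (standard formulas, not taken from this entry) are
\[
f'(x) \approx \frac{f(x+h) - f(x)}{h} \text{ (forward)}, \quad
f'(x) \approx \frac{f(x) - f(x-h)}{h} \text{ (backward)}, \quad
f'(x) \approx \frac{f(x+h) - f(x-h)}{2h} \text{ (central)}.
\]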
FiniteDifference.Type |
the available types of finite difference
|
FirstGeneration |
This interface allows customization of how the first pool of chromosomes is generated.
|
FirstOrderMinimizer |
This implements the steepest descent line search using the first order
expansion of the Taylor's series.
|
FirstOrderMinimizer.Method |
the available methods to do line search
|
FisherExactDistribution |
Fisher's exact test distribution is, as its name states, exact, and can therefore be used
regardless of the sample characteristics.
|
FixedEffectsModel |
Fits the panel data to this linear model:
\[
y_{it} = \alpha_{i}+X_{it}\mathbf{\beta}+u_{it}
\]
where \(y_{it}\) is the dependent variable observed for individual \(i\) at
time \(t\), \(X_{it}\) is the time-variant \(1\times K\) regressor matrix,
\(\alpha_{i}\) is the unobservable time-invariant individual effect and
\(u_{it}\) is the error term.
|
FletcherLineSearch |
This is Fletcher's inexact line search method.
|
FletcherPenalty |
This penalty function sums up the squared cost penalties.
|
FletcherReevesMinimizer |
The Fletcher-Reeves method is a variant of the Conjugate-Gradient method.
|
FlexibleTable |
This is a 2D table that can shrink or grow by row or by column.
|
FloatingLicenseServer |
|
Forest<V,E extends HyperEdge<V>> |
A forest is a disjoint union of trees.
|
ForwardBackwardProcedure |
The forward-backward procedure is an inference algorithm for hidden Markov
models which computes the posterior marginals of all hidden state variables
given a sequence of observations.
|
ForwardSelection |
Constructs a GLM model for a set of observations using the forward selection method.
|
ForwardSelection.Step |
|
ForwardSubstitution |
Forward substitution solves a matrix equation in the form Lx = b
by an iterative process for a lower triangular matrix L.
|
FrechetDistribution |
The Fréchet distribution is a special case (Type II) of the generalized extreme value
distribution, with \(\xi>0\).
|
Ft |
This represents the concept 'Filtration', the information available at time t.
|
FtAdaptedFunction |
This represents an Ft-adapted function that depends on X(t), B(t), or even on the whole past path of B(s), s ≤ t.
|
FtAdaptedRealFunction |
This represents an Ft-adapted function that depends on X(t), B(t), or even on the whole past path of B(s), s ≤ t.
|
FtAdaptedVectorFunction |
This represents a vector-valued Ft-adapted function that depends on X(t), B(t), or even on the whole past path of B(s), s ≤ t.
|
FtWt |
This is a filtration implementation that includes the path-dependent information,
Wt.
|
Function<D,R> |
The mathematical concept of a function expresses the idea that one quantity (the argument of the function, also known as the input) completely determines another quantity (the value, or output).
|
Function.EvaluationException |
|
FunctionOps |
These are some commonly used mathematical functions.
|
Gamma |
The Gamma function is an extension of the factorial function to real and complex numbers, with its argument shifted down by 1.
|
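Concretely, the shift means \(\Gamma(n) = (n-1)!\) for positive integers n; for example,
\[
\Gamma(5) = 4! = 24, \qquad \Gamma(1/2) = \sqrt{\pi}.
\]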
GammaDistribution |
This gamma distribution, when k is an integer, is the distribution of
the sum of k independent exponentially distributed random variables,
each of which has a mean of θ (which is equivalent to a rate parameter of
θ⁻¹).
|
GammaGergoNemes |
Gergo Nemes' algorithm is a very simple and quick way to compute the Gamma function when accuracy is not critical.
|
GammaLanczos |
Lanczos approximation provides a way to compute the Gamma function such that the accuracy can be made arbitrarily precise.
|
GammaLanczosQuick |
Lanczos approximation; computations are done in double.
|
GammaLowerIncomplete |
The Lower Incomplete Gamma function is defined as:
\[
\gamma(s,x) = \int_0^x t^{s-1}\,e^{-t}\,{\rm d}t = P(s,x)\Gamma(s)
\]
P(s,x) is the Regularized Incomplete Gamma P function.
|
GammaMixtureDistribution |
The HMM states use the Gamma distribution to model the observations.
|
GammaMixtureDistribution.Lambda |
the Gamma distribution parameters
|
GammaRegularizedP |
The Regularized Incomplete Gamma P function is defined as:
\[
P(s,x) = \frac{\gamma(s,x)}{\Gamma(s)} = 1 - Q(s,x), s \geq 0, x \geq 0
\]
|
GammaRegularizedPInverse |
The inverse of the Regularized Incomplete Gamma P function is defined as:
\[
x = P^{-1}(s,u), 0 \le u \le 1
\]
When s > 1 , we use the asymptotic inversion method.
When s <= 1 , we use an approximation of P(s,x) together with a higher-order Newton like method.
In both cases, the estimated value is then improved using Halley's method, c.f., HalleyRoot .
|
GammaRegularizedQ |
The Regularized Incomplete Gamma Q function is defined as:
\[
Q(s,x)=\frac{\Gamma(s,x)}{\Gamma(s)}=1-P(s,x), s \geq 0, x \geq 0
\]
The algorithm used for computing the regularized incomplete Gamma Q function depends on the values of s and x.
|
GammaUpperIncomplete |
The Upper Incomplete Gamma function is defined as:
\[
\Gamma(s,x) = \int_x^{\infty} t^{s-1}\,e^{-t}\,{\rm d}t = Q(s,x) \times \Gamma(s)
\]
The integrand has the same form as the Gamma function, but the lower limit of the integration is a variable.
|
GARCH11Model |
A GARCH11 model takes this form.
|
GARCHFit |
This implementation fits, for a data set, a Generalized Autoregressive Conditional
Heteroscedasticity (GARCH) model
by maximizing the likelihood function using the gradient information.
|
GARCHFit.GRADIENT |
the available methods to compute the gradient to guide the optimization search
|
GARCHModel |
The GARCH(p, q) model takes this form.
|
GARCHResamplerFactory |
|
GARCHResamplerFactory2 |
|
GARCHSim |
This class simulates the GARCH models of this form.
|
GaussChebyshevQuadrature |
Gauss-Chebyshev Quadrature uses the following weighting function:
\[
w(x) = \frac{1}{\sqrt{1 - x^2}}
\]
to evaluate integrals in the interval (-1, 1).
|
GaussHermiteQuadrature |
Gauss-Hermite quadrature exploits the fact that quadrature approximations are open integration
formulas (that is, the values of the endpoints are not required) to evaluate integrals in the range \((-\infty, \infty)\).
|
Gaussian |
The Gaussian function is defined as:
\[
f(x) = a e^{- { \frac{(x-b)^2 }{ 2 c^2} } }
\]
|
GaussianElimination |
The Gaussian elimination performs elementary row operations to reduce a matrix to the row echelon form.
|
GaussianElimination4SquareMatrix |
|
GaussianProposalFunction |
A proposal generator where each perturbation is a random vector, where each element is drawn
from a standard Normal distribution, multiplied by a scale matrix.
|
GaussianQuadrature |
A quadrature rule is a method of numerical integration in which we approximate the integral of a
function by a weighted sum of sample points.
|
GaussianQuadratureRule |
This interface defines a Gaussian quadrature rule used in Gaussian quadrature.
|
GaussJordanElimination |
Gauss-Jordan elimination performs elementary row operations to reduce a matrix to the reduced row echelon form.
|
GaussLaguerreQuadrature |
Gauss-Laguerre quadrature exploits the fact that quadrature approximations are open integration
formulas (i.e., the values of the endpoints are not required) to evaluate integrals in the range \([0, \infty)\).
|
GaussLegendreQuadrature |
Gauss-Legendre quadrature considers the simplest case of uniform weighting: \(w(x) = 1\).
|
GaussNewtonMinimizer |
The Gauss-Newton method is a steepest descent method to minimize a real
vector function in the form:
\[
f(x) = [f_1(x), f_2(x), ..., f_m(x)]'
\]
The objective function is
\[
F(x) = f'f
\]
|
GaussNewtonMinimizer.MySteepestDescent |
|
GaussSeidelSolver |
Similar to the Jacobi method, the Gauss-Seidel method (GS)
solves each equation in sequential order.
|
GBMProcess |
A Geometric Brownian motion (GBM) (occasionally, exponential Brownian motion) is
a continuous-time stochastic process in which the logarithm of the randomly varying quantity follows a Brownian motion.
|
GeneralConstraints |
The real-valued constraints define the domain (feasible regions) for a real-valued objective
function in a constrained optimization problem.
|
GeneralEqualityConstraints |
This is the collection of equality constraints for an optimization problem.
|
GeneralGreaterThanConstraints |
This is the collection of greater-than-or-equal-to constraints for an optimization problem.
|
GeneralizedConjugateResidualSolver |
The Generalized Conjugate Residual method (GCR) is useful for solving
a non-symmetric n-by-n linear system.
|
GeneralizedEVD |
Generalized extreme value (GEV) distribution is a family of continuous probability distributions
developed within extreme value theory to combine the Gumbel, Fréchet and Weibull families
also known as type I, II and III extreme value distributions.
|
GeneralizedLinearModel |
The Generalized Linear Model (GLM) is a flexible generalization of the Ordinary Least Squares
regression.
|
GeneralizedLinearModelQuasiFamily |
GLM for the quasi-families.
|
GeneralizedMinimalResidualSolver |
The Generalized Minimal Residual method (GMRES) is useful for solving a non-symmetric n-by-n
linear system.
|
GeneralizedParetoDistribution |
Generalized Pareto distribution (GPD) is used for modeling exceedances over (or shortfalls below)
a threshold.
|
GeneralizedSimulatedAnnealingMinimizer |
|
GeneralLessThanConstraints |
This is the collection of less-than or equal-to constraints for an optimization problem.
|
GenericFieldMatrix<F extends Field<F>> |
This is a generic matrix over a Field .
|
GenericMatrix<T extends GenericMatrix<T,F>,F extends Field<F>> |
This class defines a matrix over a field.
|
GenericMatrixAccess<F extends Field<F>> |
This interface defines the methods for accessing entries in a matrix over a field.
|
GenericTimeTimeSeries<T extends Comparable<? super T>> |
This is a univariate time series indexed by some notion of time.
|
GeneticAlgorithm |
A genetic algorithm (GA) is a search heuristic that mimics the process of natural evolution.
|
Getvec |
Computes the (scaled) r-th column of the inverse of the sub-matrix block of the
tridiagonal matrix T = LDL^T - λ I.
|
GEVFittingByMaximumLikelihood |
Estimate the GeneralizedEVD parameter from the observations by
maximum likelihood approach.
|
GirvanNewman<V,E extends UndirectedEdge<V>,G extends UnDiGraph<V,E>> |
The Girvan–Newman algorithm detects communities in complex systems.
|
GirvanNewman.EdgeBetweenessCtor<V> |
This allows customization of the computation of edge-betweenness.
|
GirvanNewmanUnDiGraph<V,E extends UndirectedEdge<V>> |
|
GivensMatrix |
Givens rotation is a rotation in the plane spanned by two coordinate axes.
|
Glejser |
The Glejser test tests for conditional heteroskedasticity.
|
GLMBeta |
This is the estimate of beta, β^, in a Generalized Linear Model.
|
GLMBinomial |
This is the Binomial distribution used as the error distribution in a GLM model.
|
GLMExponentialDistribution |
This interface represents a probability distribution from the exponential family.
|
GLMFamily |
Family provides a convenient way to specify the error distribution
and link function used in a GLM model.
|
GLMFitting |
This interface represents a fitting method for estimating β in a
Generalized Linear Model (GLM).
|
GLMGamma |
This is the Gamma distribution used as the error distribution in a GLM model.
|
GLMGaussian |
This is the Gaussian distribution used as the error distribution in a GLM model.
|
GLMInverseGaussian |
This is the Inverse Gaussian distribution used as the error distribution in a GLM model.
|
GLMModelSelection |
Given a set of observations {y, X}, we would like to construct a GLM to explain the data.
|
GLMModelSelection.ModelNotFound |
A ModelNotFound exception is thrown when the algorithm fails to construct a model to
explain the data.
|
GLMPoisson |
This is the Poisson distribution used as the error distribution in a GLM model.
|
GLMProblem |
This is a Generalized Linear regression problem.
|
GLMResiduals |
Residual analysis of the results of a Generalized Linear Model regression.
|
GlobalSearchByLocalMinimizer |
This minimizer is a global optimization method.
|
GoldenMinimizer |
This is the golden section univariate minimization algorithm.
|
GoldfeldQuandtTrotter |
Goldfeld, Quandt and Trotter propose the following way to coerce a non-positive definite Hessian
matrix to become symmetric, positive definite.
|
GolubKahanSVD |
Golub-Kahan algorithm does the SVD decomposition of a tall matrix in two stages.
|
GomoryMixedCutMinimizer |
This cutting-plane implementation uses Gomory's mixed cut method.
|
GomoryMixedCutMinimizer.MyCutter |
This is Gomory's mixed cut.
|
GomoryPureCutMinimizer |
This cutting-plane implementation uses Gomory's pure cut method for pure integer programming,
in which all variables are integral.
|
GomoryPureCutMinimizer.MyCutter |
This is Gomory's pure cut.
|
Gradient |
The gradient of a scalar field is a vector field which points in the direction of the greatest
rate of increase of the scalar field, and of which the magnitude is the greatest rate of change.
|
GradientFunction |
The gradient function, g(x), evaluates the gradient of a real scalar function f at a point x.
|
GramSchmidt |
The Gram-Schmidt process is a method for orthogonalizing a set of vectors in an inner product space.
|
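For illustration only, a classical Gram-Schmidt sketch on raw double[] vectors; the library's own GramSchmidt class may use different types and a numerically safer variant (e.g., modified Gram-Schmidt). The sketch assumes the inputs are linearly independent.

public class GramSchmidtSketch {
    static double dot(double[] u, double[] v) {
        double s = 0.0;
        for (int i = 0; i < u.length; i++) s += u[i] * v[i];
        return s;
    }

    // Returns an orthonormal basis spanning the same space as the input vectors.
    static double[][] orthonormalize(double[][] a) {
        double[][] q = new double[a.length][];
        for (int k = 0; k < a.length; k++) {
            double[] v = a[k].clone();
            for (int j = 0; j < k; j++) {
                double proj = dot(q[j], a[k]); // component along q[j]
                for (int i = 0; i < v.length; i++) v[i] -= proj * q[j][i];
            }
            double norm = Math.sqrt(dot(v, v)); // > 0 for linearly independent inputs
            for (int i = 0; i < v.length; i++) v[i] /= norm; // normalize
            q[k] = v;
        }
        return q;
    }
}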
Graph<V,E extends HyperEdge<V>> |
A graph is a representation of a set of objects where some pairs of the objects are connected by
links.
|
GraphTraversal<V> |
A spanning tree T of a connected, undirected graph G is a tree composed of all the
vertices and some (or perhaps all) of the edges of G.
|
GraphTraversal.Node<V> |
This is a node in a spanning tree.
|
GraphUtils |
These are the utility functions to manipulate Graph.
|
GraphUtils.EdgeFactory<V,N,E extends Edge<N>,X> |
This interface specifies how an edge is created for two nodes.
|
GraphUtils.GraphFactory<G> |
The factory to construct instances of the graph type.
|
GreaterThanConstraints |
The domain of an optimization problem may be restricted by greater-than or equal-to constraints.
|
GridSearchCetaMaximizer |
Searches (by brute force) for the maximal point of C(η) among a
grid of values.
|
GridSearchMinimizer |
This performs a grid search to find the minimum of a univariate function.
|
GridSearchMinimizer.GridDefinition |
|
GroupResampler |
|
GroupResamplerFactory |
Creates re-samplers that re-sample the whole group of stocks together.
|
GSAAcceptanceProbabilityFunction |
The GSA acceptance probability function.
|
GSAAnnealingFunction |
The GSA proposal/annealing function.
|
GSATemperatureFunction |
The GSA temperature function.
|
GumbelDistribution |
The Gumbel distribution is a special case (Type I) of the generalized extreme value distribution,
with \(\xi=0\).
|
HalleyRoot |
Halley's method is an iterative root finding method for a univariate function
with a continuous second derivative, i.e., a C2 function.
|
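Halley's update is x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f''). A minimal sketch of the iteration (not the library's HalleyRoot API):

import java.util.function.DoubleUnaryOperator;

public class HalleySketch {
    static double solve(DoubleUnaryOperator f, DoubleUnaryOperator df, DoubleUnaryOperator d2f,
                        double x0, int maxIter, double tol) {
        double x = x0;
        for (int i = 0; i < maxIter; i++) {
            double fx = f.applyAsDouble(x);
            if (Math.abs(fx) < tol) break;
            double d1 = df.applyAsDouble(x);
            double d2 = d2f.applyAsDouble(x);
            x -= 2.0 * fx * d1 / (2.0 * d1 * d1 - fx * d2); // Halley update
        }
        return x;
    }

    public static void main(String[] args) {
        // Cube root of 2: f(x) = x^3 - 2.
        double r = solve(x -> x * x * x - 2, x -> 3 * x * x, x -> 6 * x, 1.0, 50, 1e-12);
        System.out.println(r); // ~1.259921
    }
}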
HarveyGodfrey |
The Harvey-Godfrey test tests for conditional heteroskedasticity.
|
HConstruction<T extends Comparable<? super T>> |
A construction of extreme and trade points based on H discretization,
ignoring changes smaller than H.
|
HeatEquation1D |
A one-dimensional heat equation (or diffusion equation) is a parabolic PDE that takes the
following form.
|
HeatEquation2D |
A two-dimensional heat equation (or diffusion equation) is a parabolic PDE that takes the
following form.
|
HermitePolynomials |
A Hermite polynomial is defined by the recurrence relation below.
|
HermiteRule |
|
Hessenberg |
An upper Hessenberg matrix is a square matrix which has zero entries below the first
sub-diagonal.
|
HessenbergDecomposition |
Given a square matrix A, we find Q such that Q' * A * Q = H where
H is a Hessenberg matrix.
|
HessenbergDeflationSearch |
Given a Hessenberg matrix, this class searches the largest unreduced Hessenberg sub-matrix.
|
Hessian |
The Hessian matrix is the square matrix of the second-order partial derivatives of a multivariate function.
|
HessianFunction |
The Hessian function, H(x), evaluates the Hessian of a real scalar function f at a point x.
|
Heteroskedasticity |
A heteroskedasticity test tests, for a linear regression model,
whether the estimated variance of the residuals from a regression is dependent on the values of the independent variables (regressors).
|
HiddenMarkovModel |
|
HilbertMatrix |
A Hilbert matrix, H, is a symmetric matrix with entries being the unit fractions
H[i][j] = 1 / (i + j - 1)
|
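A small sketch that fills such a matrix as a raw double[][] with 1-based indices i, j (not the library's HilbertMatrix API):

public class HilbertSketch {
    static double[][] hilbert(int n) {
        double[][] h = new double[n][n];
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j++) {
                h[i - 1][j - 1] = 1.0 / (i + j - 1); // unit fraction entry
            }
        }
        return h;
    }
}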
HilbertSpace<H,F extends Field<F> & Comparable<F>> |
A Hilbert space is an inner product space, an abstract vector space in which distances and angles can be measured.
|
HmmInnovation |
An HMM innovation consists of a state and an observation in the state.
|
HMMRNG |
In a (discrete) hidden Markov model, the state is not directly visible, but
output, dependent on the state, is visible.
|
HomogeneousPathFollowingMinimizer |
This implementation solves a Semi-Definite Programming problem using the Homogeneous Self-Dual
Path-Following algorithm.
|
HornerScheme |
Horner scheme is an algorithm for the efficient evaluation of polynomials in monomial form.
|
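For a polynomial p(x) = a_0 + a_1 x + ... + a_n x^n, Horner's rule needs only n multiplications and n additions. A minimal sketch (not the library's HornerScheme API):

public class HornerSketch {
    static double evaluate(double[] a, double x) {
        double result = 0.0;
        for (int i = a.length - 1; i >= 0; i--) {
            result = result * x + a[i]; // one multiply and one add per coefficient
        }
        return result;
    }

    public static void main(String[] args) {
        // p(x) = 1 + 2x + 3x^2 at x = 2 gives 17.
        System.out.println(evaluate(new double[]{1, 2, 3}, 2.0));
    }
}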
Householder4SubVector |
Faster implementation of Householder reflection for sub-vectors at a given index.
|
Householder4ZeroGenerator |
Faster implementation of Householder reflection for zero generator vector.
|
HouseholderContext |
This is the context information about a Householder transformation.
|
HouseholderInPlace |
Maintains the matrix to be transformed by a sequence of Householder reflections.
|
HouseholderInPlace.Householder |
|
HouseholderQR |
Successive Householder reflections gradually transform a matrix A to the upper triangular
form.
|
HouseholderReflection |
A Householder transformation in the 3-dimensional space is the reflection of a vector in the
plane.
|
Hp |
This is the symmetrization operator as defined in equation (6) in the reference.
|
HuangMinimizer |
Huang's updating formula is a family of formulas which encompasses
the rank-one, DFP, BFGS as well as some other formulas.
|
HybridMCMC |
This class implements a hybrid MCMC algorithm.
|
HybridMCMCProposalFunction |
|
HyperEdge<V> |
A hyper-edge connects a set of vertices of any size.
|
HypersphereRVG |
Generates uniformly distributed points on the surface of a hypersphere.
|
HypothesisTest |
A statistical hypothesis test is a method of making decisions using experimental data.
|
IdentityHashSet<T> |
This class implements the Set interface with a hash table, using reference-equality in place of
object-equality when comparing keys and values.
|
IdentityPreconditioner |
This identity preconditioner is used when no preconditioning is applied.
|
IID |
An i.i.d. (independent and identically distributed) sequence consists of random variables that are mutually independent and share the same probability distribution.
|
ILPBranchAndBoundMinimizer |
This is a Branch-and-Bound algorithm that solves Integer Linear Programming problems.
|
ILPBranchAndBoundMinimizer.ActiveListFactory |
This factory constructs a new instance of ActiveList for each Integer Linear Programming problem.
|
ILPNode |
This is the branch-and-bound node used in conjunction with ILPBranchAndBoundMinimizer to
solve an Integer Linear Programming problem.
|
ILPProblem |
A linear program in real variables is said to be integral if it has at least one optimal solution which is integral.
|
ILPProblemImpl1 |
This implementation is an ILP problem, in which the variables can be real or integral.
|
ImmutableMatrix |
This is a read-only view of a Matrix instance.
|
ImmutableVector |
This is a read-only view of a Vector instance.
|
ImplicitModelPCA |
Given a (de-meaned) time series of vectored observations, we decompose them
into a reduced dimension of linear sum of implicit factors.
|
ImplicitModelPCA.Result |
the regression results
|
ImportanceSampling |
Importance sampling is a general technique for estimating properties of a
particular distribution, while only having samples generated from a different
distribution rather than the distribution of interest.
|
IndependentCoVAR |
This algorithm finds the independent variables based on the covariance matrix.
|
Infantino2010PCA |
The objective is to predict the next H-period accumulated returns from the
past H-period
dimensionally reduced returns.
|
Infantino2010PCA.Signal |
|
Infantino2010Regime |
Detects the current regime (mean reversion or momentum) by cross-sectional volatility.
|
Infantino2010Regime.Regime |
|
InitialsFactory |
Some optimization algorithms, e.g., Nelder-Mead, Differential-Evolution, require a set of initial points to work with.
|
InnerProduct |
The Frobenius inner product is the component-wise inner product of two matrices as though they are vectors.
|
InnovationsAlgorithm |
The innovations algorithm is an efficient way to obtain a one step least square linear predictor
for a univariate linear time series with known auto-covariance and these properties (not limited
to ARMA processes):
{x_t} can be non-stationary.
E(x_t) = 0 for all t.
This class implements the part of the innovations algorithm that computes the prediction error
variances, v and prediction coefficients θ.
|
Integral |
The class represents an integral of a function, in the Lebesgue sense.
|
IntegralConstrainedCellFactory |
This implementation defines the constrained Differential Evolution operators that solve an Integer Programming problem.
|
IntegralConstrainedCellFactory.AllIntegers |
This integral constraint makes all variables in the objective function integral variables.
|
IntegralConstrainedCellFactory.IntegerConstraint |
The integral constraints are defined by implementing this interface.
|
IntegralConstrainedCellFactory.SomeIntegers |
This integral constraint makes some variables in the objective function integral variables.
|
IntegralDB |
This class evaluates the following class of integrals.
|
IntegralDt |
This class evaluates the following class of integrals.
|
IntegralExpectation |
This class computes the expectation of the following class of integrals.
|
Integrator |
This defines the interface for the numerical integration of definite integrals of univariate functions.
|
Interpolation |
Interpolation is a method of constructing new data points within the range of
a discrete set of known data points.
|
Interval<T extends Comparable<? super T>> |
For a partially ordered set, there is a binary relation, denoted as ≤, that indicates that,
for certain pairs of elements in the set, one of the elements precedes the other.
|
IntervalRelation |
Allen's Interval Algebra is a calculus for temporal reasoning that was introduced by James F. Allen in 1983.
|
Intervals<T extends Comparable<? super T>> |
This is a disjoint set of intervals.
|
IntTimeTimeSeries |
This is a univariate time series indexed by integers.
|
IntTimeTimeSeries.Entry |
This is the TimeSeries.Entry for an integer-indexed univariate time series.
|
InvalidLicense |
This is the LicenseError thrown when calling a class or method that is not yet licensed.
|
Inverse |
For a square matrix A, the inverse, A^{-1}, if it exists, satisfies
A.multiply(A.inverse()) == A.ONE()
There are multiple ways to compute the inverse of a matrix.
|
InverseIteration |
Inverse iteration is an iterative eigenvalue algorithm.
|
InverseIteration.StoppingCriterion |
This interface defines the convergence criterion.
|
InverseTransformSampling |
Inverse transform sampling (also known as inversion sampling, the inverse probability integral
transform, the inverse transformation method, Smirnov transform, golden rule, etc.)
is a basic method for pseudo-random number sampling,
i.e., generating sample numbers at random from any probability distribution given its cumulative distribution function.
|
InverseTransformSamplingEVDRNG |
Generate random numbers according to a given univariate extreme value distribution, by
inverse transform sampling.
|
InverseTransformSamplingExpRNG |
This is a pseudo random number generator that samples from the exponential
distribution using the inverse transform sampling method.
|
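As a minimal illustration of the idea (not the library's API), the exponential CDF F(x) = 1 - e^{-λx} inverts to F^{-1}(u) = -ln(1 - u)/λ, so a uniform draw maps directly to an exponential draw:

import java.util.Random;

public class ExpInverseTransformSketch {
    static double nextExponential(Random rng, double lambda) {
        double u = rng.nextDouble();        // uniform on [0, 1)
        return -Math.log(1.0 - u) / lambda; // apply the inverse CDF
    }

    public static void main(String[] args) {
        Random rng = new Random(12345L);
        double sum = 0.0;
        int n = 100000;
        for (int i = 0; i < n; i++) sum += nextExponential(rng, 2.0);
        System.out.println(sum / n); // sample mean ~ 1 / lambda = 0.5
    }
}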
InverseTransformSamplingGammaRNG |
Deprecated.
|
InverseTransformSamplingTruncatedNormalRNG |
A random variate x defined as
\[
x = \Phi^{-1}( \Phi(\alpha) + U\cdot(\Phi(\beta)-\Phi(\alpha)))\sigma + \mu
\]
with \(\Phi\) the cumulative distribution function and \(\Phi^{-1}\) its inverse, U a
uniform random number on (0, 1), follows the distribution truncated to the range (a,
b).
|
InvertingVariable |
This is the inverting-variable transformation.
|
IPMinimizer<T extends IPProblem,S extends MinimizationSolution<Vector>> |
An Integer Programming minimizer minimizes an objective function subject to equality/inequality
constraints as well as integral constraints.
|
IPProblem |
An Integer Programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers.
|
IPProblemImpl1 |
This is an implementation of a general Integer Programming problem in which some variables take only integers.
|
IteratesMonitor<S> |
|
IterationBody<T> |
This interface defines the code snippet to be run in parallel.
|
IterationMonitor<S> |
To debug an iterative algorithm, such as in IterativeMethod, it is
useful to keep track of all the states generated in the iterations.
|
IterativeC2Maximizer |
A maximization problem is simply minimizing the negative of the objective function.
|
IterativeC2Maximizer.Solution |
|
IterativeC2Minimizer |
This is a minimizer that minimizes a twice continuously differentiable, multivariate function.
|
IterativeCentralDifference |
An iterative central difference algorithm to obtain a numerical approximation to Poisson's
equations with Dirichlet boundary conditions.
|
IterativeIntegrator |
An iterative integrator computes an integral by a series of sums, which approximates the value of the integral.
|
IterativeLinearSystemSolver |
An iterative method for solving an N-by-N (or non-square) linear system
Ax = b involves a sequence of matrix-vector multiplications.
|
IterativeLinearSystemSolver.Solution |
This is the solution to a system of linear equations using an iterative
solver.
|
IterativeMethod<S> |
An iterative method is a mathematical procedure that generates a sequence of
improving approximate solutions for a class of problems.
|
IterativeMinimizer<P extends OptimProblem> |
This is an iterative multivariate minimizer.
|
IterativeSolution<S> |
Many minimization algorithms work by starting from some given initials and iteratively moving
toward an approximate solution.
|
IWLS |
This implementation estimates parameters β in a GLM model using the Iteratively
Re-weighted Least Squares algorithm.
|
Jacobian |
The Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued function.
|
JacobianFunction |
The Jacobian function, J(x), evaluates the Jacobian of a real vector-valued function f at a point x.
|
JacobiPreconditioner |
The Jacobi (or diagonal) preconditioner is one of the simplest forms of
preconditioning, such that the preconditioner is the diagonal of
the coefficient matrix, i.e., P = diag(A).
|
JacobiSolver |
The Jacobi method solves each of the n equations in a linear
system Ax = b in isolation in each iteration.
|
JarqueBera |
The Jarque-Bera test is a goodness-of-fit measure of departure from normality, based on the sample kurtosis and skewness.
|
JarqueBeraDistribution |
Jarque-Bera distribution is the distribution of the Jarque-Bera statistic, which measures the departure from normality.
|
JenkinsTraubReal |
The Jenkins-Traub algorithm is a fast globally convergent iterative method for solving for polynomial roots.
|
JohansenAsymptoticDistribution |
|
JohansenAsymptoticDistribution.F |
This is a filtration function.
|
JohansenAsymptoticDistribution.Test |
the available types of Johansen cointegration tests
|
JohansenAsymptoticDistribution.TrendType |
the available types of trends
|
JohansenTest |
The maximum number of cointegrating relations among a multivariate time
series is the rank of the Π matrix.
|
JordanExchange |
Jordan Exchange swaps the r-th entering variable (row) with the s-th leaving
variable (column) in a matrix A.
|
Kagi<T extends Comparable<? super T>> |
KAGI construction of a random process.
|
Kagi.Trend |
|
KagiModel |
Maintains the states of a KAGI model.
|
KendallRankCorrelation |
The Kendall rank correlation coefficient, commonly referred to as Kendall's tau (τ)
coefficient, is a statistic used to measure the association between two measured quantities.
|
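A simple O(n^2) sketch of Kendall's tau without tie correction (tau-a); the library's class likely handles ties and other variants, so this is illustrative only.

public class KendallTauSketch {
    static double tau(double[] x, double[] y) {
        int n = x.length;
        int concordant = 0, discordant = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                double s = Math.signum(x[i] - x[j]) * Math.signum(y[i] - y[j]);
                if (s > 0) concordant++;
                else if (s < 0) discordant++;
                // ties contribute to neither count in this simple version
            }
        }
        return (concordant - discordant) / (0.5 * n * (n - 1));
    }
}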
Kernel |
The kernel or null space (also nullspace) of a matrix A is the set of all vectors x for which Ax = 0.
|
Kernel.Method |
These are the available methods to compute kernel basis.
|
KnightSatchellTran1995 |
Implements the Knight-Satchell-Tran model of financial asset returns.
|
KnightSatchellTran1995MLE |
Fits a KST model from returns.
|
Knuth1969 |
This is a random number generator that generates random deviates according to
the Poisson distribution.
|
KolmogorovDistribution |
The Kolmogorov distribution is the distribution of the Kolmogorov-Smirnov statistic.
|
KolmogorovOneSidedDistribution |
Compute the probability that F(x) is dominated by the upper confidence contour, for all x:
P_n(ε) = Pr{F(x) < min{F_n(x) + ε, 1}}
|
KolmogorovSmirnov |
The Kolmogorov-Smirnov test (KS test) compares a sample with a reference probability distribution (one-sample KS test),
or compares two samples (two-sample KS test).
|
KolmogorovSmirnov.Side |
the available types of the Kolmogorov-Smirnov statistic
|
KolmogorovSmirnov.Type |
the available types of the Kolmogorov-Smirnov tests
|
KolmogorovSmirnov1Sample |
The one-sample Kolmogorov-Smirnov test (one-sample KS test) compares a sample with a reference probability distribution.
|
KolmogorovSmirnov2Samples |
The two-sample Kolmogorov-Smirnov test (two-sample KS test) tests for the equality of the
distributions of two samples.
|
KolmogorovTwoSamplesDistribution |
Compute the p-values for the generalized (conditionally distribution-free) Smirnov homogeneity test.
|
KolmogorovTwoSamplesDistribution.Side |
the available types of Kolmogorov-Smirnov two-sample test
|
KroneckerProduct |
Given an m-by-n matrix A and a p-by-q matrix B,
their Kronecker product C, also called their matrix direct product, is
an (mp)-by-(nq) matrix with entries defined by
c_{st} = a_{ij} b_{kl},
where s = p(i - 1) + k and t = q(j - 1) + l.
|
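A minimal sketch on raw arrays, using 0-based indices so that C[p*i + k][q*j + l] = A[i][j] * B[k][l] (not the library's KroneckerProduct API):

public class KroneckerSketch {
    static double[][] kronecker(double[][] a, double[][] b) {
        int m = a.length, n = a[0].length;
        int p = b.length, q = b[0].length;
        double[][] c = new double[m * p][n * q];
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < p; k++)
                    for (int l = 0; l < q; l++)
                        c[p * i + k][q * j + l] = a[i][j] * b[k][l]; // block (i, j) is a[i][j] * B
        return c;
    }
}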
KruskalWallis |
The Kruskal-Wallis test is a non-parametric method for testing the equality of population medians among groups.
|
KunduGupta2007 |
Kundu-Gupta propose a very convenient way to generate gamma random variables
using generalized exponential distribution, when the shape parameter lies
between 0 and 1.
|
Kurtosis |
Kurtosis measures the "peakedness" of the probability distribution of a real-valued random
variable.
|
LaguerrePolynomials |
Laguerre polynomials are defined by the recurrence relation below.
|
LaguerreRule |
|
Lai2010NPEBModel |
The Non-Parametric Empirical Bayes (NPEB) model described in the reference
computes the optimal weights for asset allocation.
|
Lai2010NPEBModel.OptimalWeights |
|
Lai2010OptimizationAlgorithm |
|
Lanczos |
The Lanczos approximation is a method for computing the Gamma function numerically, published by Cornelius Lanczos in 1964.
|
LARSFitting |
This class computes the entire LARS sequence of coefficients and fits,
starting from zero to the OLS fit.
|
LARSFitting.Estimators |
|
LARSProblem |
Least Angle Regression (LARS) is a regression algorithm for high-dimensional
data.
|
LDDecomposition |
Represents an LDL^T decomposition of a shifted symmetric tridiagonal matrix
T.
|
LDFactorizationFromRoot |
Decomposes (T - σI) into LDL^T, where T is a symmetric
tridiagonal matrix, σ is a shift for this factorization, L is a unit lower
triangular matrix, and D is a diagonal matrix.
|
LDLt |
The LDL decomposition decomposes a real and symmetric (hence square) matrix A into A = L * D * L^t.
|
LeapFrogging |
The leap-frogging algorithm is a method for simulating Molecular Dynamics, which is
time-reversible.
|
LeapFrogging.DynamicsState |
Contains the entire state (both the position and the momentum) at a given point in time.
|
LeastPth<T> |
The least p-th minmax algorithm minimizes the maximal error/loss (function):
\[
\min_x \max_{\omega \in S} e(x, \omega)
\]
\(e(x, \omega)\) is the error or loss function.
|
LeastSquares |
This method obtains a least squares estimate of a polynomial to fit the input
data, by a weighted sum of orthogonal polynomials up to a specified order.
|
LeastSquares.Weighting |
This interface defines a weighting for observations.
|
Lebesgue |
Lebesgue integration is the general theory of integration of a function with respect to a general measure.
|
LEcuyer |
This is the uniform random number generator recommended by L'Ecuyer in 1996.
|
LedoitWolf2004 |
To estimate the covariance matrix, Ledoit and Wolf (2004) suggest using the
matrix obtained from the sample covariance matrix through a transformation
called shrinkage.
|
LedoitWolf2004.Result |
The estimator and some intermediate values computed by the algorithm.
|
LedoitWolf2016 |
This is Ledoit's non-linear shrinkage method for computing covariance
matrices when the dimension is large compared to the number of observations.
|
LedoitWolf2016.Result |
the estimator and some intermediate values computed by the algorithm
|
LegendrePolynomials |
A Legendre polynomial is defined by the recurrence relation below.
|
LegendreRule |
|
Lehmer |
Lehmer proposed a general linear congruential generator that generates pseudo-random numbers in
[0, 1].
|
LessThanConstraints |
The domain of an optimization problem may be restricted by less-than or equal-to constraints.
|
Levene |
The Levene test tests for the equality of variance of groups.
|
Levene.Type |
the available implementations when computing the absolute deviations
|
License |
This is the license management system for the library.
|
LicenseError |
General error regarding the license, e.g., errors when loading license.
|
Lilliefors |
Lilliefors test tests the null hypothesis that data come from a normally distributed population
with an estimated sample mean and variance.
|
LILSparseMatrix |
The list of lists (LIL) format for sparse matrix stores one list per row,
where each entry stores a column index and value.
|
LinearCongruentialGenerator |
A linear congruential generator (LCG) produces a sequence of pseudo-random numbers
based on a linear recurrence relation.
|
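A minimal sketch of such a recurrence x_{n+1} = (a * x_n) mod m, using the Park-Miller "minimal standard" parameters a = 48271 and m = 2^31 - 1; these particular constants are an illustrative choice, not necessarily what this library's generator classes use.

public class LcgSketch {
    private static final long M = 2147483647L; // 2^31 - 1, a Mersenne prime
    private static final long A = 48271L;      // multiplier
    private long state;

    LcgSketch(long seed) {
        this.state = Math.floorMod(seed, M - 1) + 1; // keep the state in [1, M - 1]
    }

    double nextDouble() {
        state = (A * state) % M;   // linear recurrence
        return (double) state / M; // scale to (0, 1)
    }
}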
LinearConstraints |
This is a collection of linear constraints for a real-valued optimization problem.
|
LinearEqualityConstraints |
This is a collection of linear equality constraints.
|
LinearFit |
Find the parameters for the ACER function from the given empirical epsilon, using OLS regression
on the logarithm of the values.
|
LinearGreaterThanConstraints |
This is a collection of linear greater-than-or-equal-to constraints.
|
LinearInterpolation |
(Piecewise-)Linear interpolation fits a curve by interpolating linearly
between two adjacent data-points.
|
LinearInterpolator |
Define a univariate function by linearly interpolating between adjacent points.
|
LinearKalmanFilter |
The Kalman filter, also known as linear quadratic estimation (LQE),
is an algorithm which uses a series of measurements observed over time,
containing noise (random variations) and other inaccuracies,
and produces estimates of unknown variables that tend to be more precise than those that would be
based on a single measurement alone.
|
LinearLessThanConstraints |
This is a collection of linear less-than-or-equal-to constraints.
|
LinearModel |
A linear model provides fitting and the residual analysis (goodness of fit).
|
LinearRepresentation |
The linear representation of an Autoregressive Moving Average (ARMA) model is a (truncated) infinite sum of AR terms.
|
LinearRoot |
This is a solver for finding the roots of a linear equation.
|
LinearSystemSolver |
Solve a system of linear equations in the form:
Ax = b.
We assume that, after row reduction, A has no more rows than columns.
|
LinearSystemSolver.NoSolution |
This is the runtime exception thrown when it fails to solve a system of linear equations.
|
LinearSystemSolver.Solution |
This is the solution to a linear system of equations.
|
LineSearch |
A line search is often used in another minimization algorithm to improve the current solution in one iteration step.
|
LineSearch.Solution |
This is the solution to a line search minimization.
|
LineSegment |
Represent a line segment.
|
LinkCloglog |
This class represents the complementary log-log link function:
g(x) = log(-log(1 - x))
|
LinkFunction |
This interface represents a link function g(x) in a Generalized Linear Model (GLM).
|
LinkIdentity |
This class represents the identity link function:
g(x) = x
|
LinkInverse |
This class represents the inverse link function:
g(x) = 1/x
|
LinkInverseSquared |
This class represents the inverse-squared link function:
g(x) = 1/x^2
|
LinkLog |
This class represents the log link function:
g(x) = log(x)
|
LinkLogit |
This class represents the logit link function:
\[
g(x) = \log(\frac{\mu}{1-\mu})
\]
|
LinkProbit |
This class represents the Probit link function,
which is the inverse of cumulative distribution function of the standard Normal distribution N(0, 1).
|
LinkSqrt |
This class represents the square-root link function:
g(x) = sqrt(x)
|
LjungBox |
The Ljung-Box test (named for Greta M. Ljung and George E. P. Box) tests whether any of a group of autocorrelations of a time series are different from zero.
|
LMBeta |
Beta coefficients are the outcomes of fitting a linear regression model.
|
LMDiagnostics |
This class collects some diagnostic measures for the goodness of fit based on the residuals of
a linear regression model.
|
LMInformationCriteria |
The information criteria measure the goodness of fit of an estimated statistical model.
|
LMProblem |
This is a linear regression or a linear model (LM) problem.
|
LMResiduals |
This is the residual analysis of the results of a linear regression model.
|
LocalDateInterval |
Represents an interval between two LocalDate instances.
|
LocalDateTimeInterval |
Represents an interval between two LocalDateTime instances.
|
LocalDateTimeUtils |
|
LocalSearchCellFactory<P extends OptimProblem,T extends IterativeMinimizer<OptimProblem>> |
|
LocalSearchCellFactory.MinimizerFactory<U extends IterativeMinimizer<OptimProblem>> |
This factory constructs a new Minimizer for each mutation operation.
|
LogBeta |
This class represents the log of Beta function log(B(x, y)) .
|
LogGamma |
The log-Gamma function, \(\log (\Gamma(z))\), for positive real numbers, is the log of the Gamma function.
|
LogGamma.Method |
the available methods to compute \(\log (\Gamma(z))\)
|
LogisticBeta |
Beta coefficient estimator, β^, of a logistic regression model.
|
LogisticProblem |
A logistic regression problem is a variation of the OLS regression problem.
|
LogisticRegression |
A logistic regression (sometimes called the logistic model or logit model) is used for prediction
of the probability of occurrence of an event by fitting data to a logistic curve (the logit function).
|
LogisticResiduals |
Residual analysis of the results of a logistic regression.
|
LogNormalDistribution |
A log-normal distribution is a probability distribution of a random variable whose logarithm is normally distributed.
|
LogNormalMixtureDistribution |
The HMM states use the Log-Normal distribution to model the observations.
|
LogNormalMixtureDistribution.Lambda |
the log-normal distribution parameters
|
LogNormalRNG |
This random number generator samples from the log-normal distribution.
|
LoopBody |
The implementation of this interface contains the code inside a for-loop
construct.
|
LowerBoundConstraints |
These are lower bound constraints such that, for all x_i's,
x_i ≥ b
|
LowerTriangularMatrix |
A lower triangular matrix has 0 entries where column index > row index.
|
LPBoundedMinimizer |
This is the solution to a bounded linear programming problem.
|
LPCanonicalProblem1 |
This is a linear programming problem in the 1st canonical form (following the convention in the reference):
min c'x
s.t.
|
LPCanonicalProblem2 |
This is a linear programming problem in the 2nd canonical form (following the convention in the wiki):
min c'x
s.t.
|
LPCanonicalSolver |
This is an LP solver that solves a canonical LP problem in the following form.
|
LPDimensionNotMatched |
This is the exception thrown when the dimensions of the objective function and constraints of a linear programming problem are inconsistent.
|
LPEmptyCostVector |
This is the exception thrown when there is no objective function in a linear programming problem.
|
LPException |
This is the exception thrown when there is any problem when solving a linear programming problem.
|
LPInfeasible |
This is the exception thrown when the LP problem is infeasible, i.e., no solution.
|
LPMinimizer |
An LP minimizer minimizes the objective of an LP problem, satisfying all the constraints.
|
LPNoConstraint |
This is the exception thrown when there is no linear constraint found for the LP problem.
|
LPProblem |
A linear programming (LP) problem minimizes a linear objective function subject to a collection of linear constraints.
|
LPProblemImpl1 |
This is an implementation of a linear programming problem, LPProblem.
|
LPRevisedSimplexSolver |
|
LPRevisedSimplexSolver.Problem |
|
LPRuntimeException |
This is the exception thrown when there is any problem when constructing a linear programming problem.
|
LPSimplexMinimizer |
A simplex LP minimizer can be read off from the solution simplex table.
|
LPSimplexSolution |
The solution to a linear programming problem using a simplex method contains an LPSimplexMinimizer.
|
LPSimplexSolver<P extends LPProblem> |
A simplex solver works toward an LP solution by sequentially applying Jordan exchange to a simplex table.
|
LPSolution<T extends LPMinimizer> |
A solution to an LP problem contains all information about solving an LP problem such as
whether the problem has a solution (bounded), how many minimizers it has, and the minimum.
|
LPSolver<P extends LPProblem,S extends LPSolution<?>> |
An LP solver solves a Linear Programming (LP) problem.
|
LPStandardProblem |
This is a linear programming problem in the standard form:
min c'x
s.t.
|
LPTwoPhaseSolver |
This implementation solves a linear programming problem, LPProblem, using a two-phase approach.
|
LPUnbounded |
This is the exception thrown when the LP problem is unbounded.
|
LPUnboundedMinimizer |
This is the solution to an unbounded linear programming problem.
|
LPUnboundedMinimizerScheme2 |
This is the solution to an unbounded linear programming problem found in scheme 2.
|
LSProblem |
This is the problem of solving a system of linear equations.
|
LU |
LU decomposition decomposes an n x n matrix A so that P * A = L * U.
|
LUDecomposition |
LU decomposition decomposes an n x n matrix A so that P * A = L * U.
|
LUSolver |
Use LU decomposition to solve Ax = b where A is square and
det(A) != 0.
|
MADecomposition |
This class decomposes a time series into the trend, seasonal and stationary random components
using the Moving Average Estimation method with symmetric window.
|
MAModel |
This class represents a univariate MA model.
|
MarketImpact1 |
Constructs the constraint coefficient arrays of a market impact term in the
compact form.
|
MarkowitzByCLM |
Solves for the optimal weights in the Markowitz formulation by critical line
method.
|
MarkowitzByQP |
Modern portfolio theory (MPT) is a theory of investment which attempts to
maximize portfolio expected return for a given amount of portfolio risk, or
equivalently minimize risk for a given level of expected return, by carefully
choosing the proportions of various assets.
|
MarkowitzCriticalLine |
|
MARMAModel |
Simulation of max autoregressive moving average processes, i.e., MARMA(p, q) processes.
|
MARMASim |
Generate random numbers based on a given MARMA model.
|
MARModel |
This is equivalent to MARMA(p, 0).
|
MarsagliaBray1964 |
The polar method (attributed to George Marsaglia, 1964) is a pseudo-random number sampling method
for generating a pair of independent standard normal random variables.
|
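A minimal sketch of the polar method (not the library's API): draw (u, v) uniformly in the unit disc and scale by sqrt(-2 ln s / s), where s = u^2 + v^2.

import java.util.Random;

public class PolarMethodSketch {
    // Returns two independent N(0, 1) samples.
    static double[] nextPair(Random rng) {
        double u, v, s;
        do {
            u = 2.0 * rng.nextDouble() - 1.0; // uniform on (-1, 1)
            v = 2.0 * rng.nextDouble() - 1.0;
            s = u * u + v * v;
        } while (s >= 1.0 || s == 0.0);       // reject points outside the unit disc
        double factor = Math.sqrt(-2.0 * Math.log(s) / s);
        return new double[]{u * factor, v * factor};
    }
}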
MarsagliaTsang2000 |
Marsaglia-Tsang is a procedure for generating a gamma variate as the cube of a suitably scaled
normal variate.
|
MAT |
MAT is the inverse operator of SVEC.
|
MathTable |
A mathematical table consists of numbers showing the results of calculation with varying
arguments.
|
Matrix |
This interface defines a Matrix as a Ring, a Table,
and a few more methods not already defined in its mathematical definition.
|
MatrixAccess |
This interface defines the methods for accessing entries in a matrix.
|
MatrixAccessException |
This is the runtime exception thrown when trying to access an invalid entry in a matrix, e.g., A[0, 0].
|
MatrixCoordinate |
The location of a matrix entry is specified by 2D coordinates (i, j), where i and
j are the row-index and column-index of the entry respectively.
|
MatrixFactory |
These are the utility functions to create a new matrix/vector from existing ones.
|
MatrixMathOperation |
This interface defines some standard operations for generic matrices.
|
MatrixMeasure |
A measure, μ, of a matrix, A, is a map from the Matrix space to the Real line.
|
MatrixMismatchException |
This is the runtime exception thrown when an operation acts on matrices that have incompatible dimensions.
|
MatrixPropertyUtils |
These are the boolean operators that take matrices or vectors and check if they satisfy a
certain property.
|
MatrixRing |
A matrix ring is the set of all n × n matrices over an arbitrary Ring R.
|
MatrixRootByDiagonalization |
The square root of a matrix extends the notion of square root from numbers to matrices.
|
MatrixSingularityException |
This is the runtime exception thrown when an operation acts on a singular matrix, e.g., applying LU decomposition to a singular matrix.
|
MatrixTable |
A matrix is represented by a rectangular table structure with accessors.
|
MatrixUtils |
These are the utility functions to apply to matrices.
|
MatthewsDavies |
Matthews and Davies propose the following way to coerce a non-positive definite Hessian matrix to
become symmetric, positive definite.
|
Max |
The maximum of a sample is the biggest value in the sample.
|
MaximaDistribution |
The distribution of \(M\), where \(M=\max(x_1,x_2,...,x_n)\) and \(x_i\)'s are iid samples drawn
from a random variable \(X\) with cdf \(F(x)\).
|
MaximizationSolution<T> |
This is the solution to a maximization problem.
|
MaximumLikelihoodFitting |
This interface defines model fitting by maximum likelihood algorithm.
|
Maxmizer<P extends OptimProblem,S extends MaximizationSolution<?>> |
This interface represents an optimization algorithm that maximizes a real valued objective
function, one or multi dimension.
|
McCormickMinimizer |
Deprecated.
|
MCLNiedermayer |
Implements Markowitz's critical line algorithm.
|
MCUtils |
These are the utility functions to examine a Markov chain.
|
Mean |
The mean of a sample is the sum of all numbers in the sample,
divided by the sample size.
|
MeanEstimator |
|
MeanEstimatorMaxLevelShift |
|
MeanPriceEstimator |
Defines how to estimate the mean price.
|
MersenneExponent |
The value of a Mersenne Exponent p is a parameter for creating a Mersenne-Twister random
number generator with a period of 2^p - 1.
|
MersenneTwister |
Mersenne Twister is one of the best pseudo random number generators
available.
|
MersenneTwisterParam |
|
MersenneTwisterParamSearcher |
Searches for Mersenne-Twister parameters.
|
Metropolis |
This basic Metropolis implementation assumes a symmetric proposal function.
|
MetropolisAcceptanceProbabilityFunction |
Uses the classic Metropolis rule, f_{t+1}/f_t.
|
MetropolisHastings |
A generalization of the Metropolis algorithm, which allows asymmetric proposal
functions.
|
MetropolisHastings.ProposalDensityFunction |
Defines the density of a proposal function, i.e.
|
MetropolisUtils |
Utility functions for Metropolis algorithms.
|
Midpoint |
The midpoint rule computes an approximation to a definite integral,
made by finding the area of a collection of rectangles whose heights are determined by the values of the function.
|
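A minimal sketch of the composite midpoint rule with n equal sub-intervals (not the library's Midpoint API):

import java.util.function.DoubleUnaryOperator;

public class MidpointSketch {
    static double integrate(DoubleUnaryOperator f, double a, double b, int n) {
        double h = (b - a) / n;
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            double mid = a + (i + 0.5) * h;  // midpoint of the i-th sub-interval
            sum += f.applyAsDouble(mid) * h; // rectangle of height f(mid)
        }
        return sum;
    }

    public static void main(String[] args) {
        // Integral of x^2 over [0, 1] is 1/3.
        System.out.println(integrate(x -> x * x, 0.0, 1.0, 1000)); // ~0.333333
    }
}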
MilsteinSDE |
Milstein scheme is a first-order approximation to a continuous-time SDE.
|
Min |
The minimum of a sample is the smallest value in the sample.
|
MinimaDistribution |
The distribution of \(M\), where \(M=\min(x_1,x_2,...,x_n)\) and \(x_i\)'s are iid samples drawn
from a random variable \(X\) with cdf \(F(x)\).
|
MinimalResidualSolver |
The Minimal Residual method (MINRES) is useful for solving a symmetric n-by-n linear system
(possibly indefinite or singular).
|
MinimizationSolution<T> |
This is the solution to a minimization problem.
|
Minimizer<P extends OptimProblem,S extends MinimizationSolution<?>> |
This interface represents an optimization algorithm that minimizes a real valued objective
function, one or multi dimension.
|
MinimumWeights |
This constraint puts lower bounds on weights.
|
MinMaxMinimizer<T> |
A minmax minimizer minimizes a minmax problem.
|
MinMaxProblem<T> |
A minmax problem is a decision rule used in decision theory, game theory, statistics and philosophy for minimizing the possible loss while maximizing the potential gain.
|
MixedRule |
The mixed rule is good for integrands that fall off rapidly at infinity, e.g., \(e^{-x^2}\) or \(e^{-x}\).
The integration region is \((0, +\infty)\).
|
MixtureDistribution |
This is the conditional distribution of the observations in each state
(possibly differently parameterized) of a mixture hidden Markov model.
|
MixtureHMM |
This is the mixture hidden Markov model (HMM).
|
MixtureHMMEM |
The EM algorithm is used to find the unknown parameters of a hidden Markov
model (HMM) by making use of the forward-backward algorithm.
|
MixtureHMMEM.TrainedModel |
the result of the EM algorithm
|
MMAModel |
This is equivalent to MARMA(0, q).
|
ModelResamplerFactory |
|
Moments |
Compute the central moment of a data set incrementally.
|
MomentsEstimatorLedoitWolf |
|
Monoid<G> |
A monoid is an algebraic structure with a binary operation (×), satisfying these axioms:
closure
associativity
existence of multiplicative identity
|
MovingAverage |
This applies a linear filter to a univariate time series using the moving average estimation.
|
MovingAverage.Side |
the available types of moving average filtering
|
MovingAverageByExtension |
This implements a moving average filter with these properties:
1) both past and future observations are used in smoothing;
2) the head is prepended with the first element in the inputs (x_t = x_1 for t < 1);
3) the tail is appended with the last element in the inputs (x_t = x_n for t > n).
|
MR3 |
Computes eigenvalues and eigenvectors of a given symmetric tridiagonal matrix T using
"Algorithm of Multiple Relatively Robust Representations" (MRRR).
|
MRG |
A Multiple Recursive Generator (MRG) is a linear congruential generator which takes this form:
|
MRModel |
A Mean Reversion Model computes the target position given the current price.
|
MRModelRanged |
|
MultiCubicSpline |
Implementation of natural cubic spline interpolation for an arbitrary number of dimensions.
|
MultiDimensionalArray<T> |
A generic multi-dimensional array, with an arbitrary number of dimensions.
|
MultiDimensionalCollection<T> |
A generic collection with an arbitrary number of dimensions.
|
MultiDimensionalGrid |
An arbitrary dimensional grid.
|
MultiDimensionalGrid.Discretization |
Specifies the discretization of an interval.
|
MultiLinearInterpolation |
Implementation of linear interpolation for an arbitrary number of dimensions.
|
MultinomialBetaFunction |
A multinomial Beta function is defined as:
\[
\frac{\prod_{i=1}^K \Gamma(\alpha_i)}{\Gamma\left(\sum_{i=1}^K
\alpha_i\right)},\qquad\boldsymbol{\alpha}=(\alpha_1,\cdots,\alpha_K)
\]
|
MultinomialDistribution |
|
MultinomialRVG |
A multinomial distribution puts N objects into K bins according
to the bins' probabilities.
|
MultipleExecutionException |
This exception is thrown when any of the parallel tasks throws an exception during execution.
|
MultiplicativeModel |
The multiplicative model of a time series is a multiplicative composite of the trend, seasonality and irregular random components.
|
MultiplierPenalty |
A multiplier penalty function allows different weights to be assigned to the constraints.
|
MultipointHybridMCMC |
A multi-point Hybrid Monte Carlo is an extension of HybridMCMC, where during the
proposal generation instead of considering only the last configuration after the dynamics
simulation, we pick a proposal from a window of the last M configurations.
|
MultivariateArrayGrid |
|
MultivariateAutoCorrelationFunction |
This is the auto-correlation function of a multi-dimensional time series {X_t}.
|
MultivariateAutoCovarianceFunction |
This is the auto-covariance function of a multi-dimensional time series {X_t},
\[
K(i, j) = E((X_i - \mu_i) \times (X_j - \mu_j)')
\]
For a stationary process, the auto-covariance depends only on the lag, |i - j|.
|
MultivariateBrownianRRG |
This is the Random Walk construction of a multivariate Brownian motion.
|
MultivariateBrownianSDE |
A multivariate Brownian motion is a stochastic process with the following properties.
|
MultivariateDiscreteSDE |
This interface represents the discrete approximation of a multivariate SDE.
|
MultivariateDLM |
This is the multivariate controlled DLM (controlled Dynamic Linear Model) specification.
|
MultivariateDLMSeries |
This is a simulator for a multivariate controlled dynamic linear model process.
|
MultivariateDLMSeries.Entry |
This is the TimeSeries.Entry for a multivariate DLM time series.
|
MultivariateDLMSim |
This is a simulator for a multivariate controlled dynamic linear model process.
|
MultivariateDLMSim.Innovation |
a simulated innovation
|
MultivariateEulerSDE |
The Euler scheme is the first order approximation of an SDE.
|
MultivariateExponentialFamily |
The exponential family is an important class of probability distributions sharing this particular
form.
|
MultivariateFiniteDifference |
A partial derivative of a multivariate function is the derivative with respect to one of the variables with the others held constant.
|
MultivariateForecastOneStep |
The innovation algorithm is an efficient way to obtain
a one step least square linear predictor for a multivariate linear time series
with known auto-covariance and these properties (not limited to ARMA processes):
{x_t} can be non-stationary.
E(x_t) = 0 for all t.
|
MultivariateFt |
This represents the concept 'Filtration', the information available at time t.
|
MultivariateFtWt |
This is a filtration implementation that includes the path-dependent information,
W_t.
|
MultivariateGenericTimeTimeSeries<T extends Comparable<? super T>> |
This is a multivariate time series indexed by some notion of time.
|
MultivariateGrid |
A multivariate rectilinear (not necessarily uniform) grid of double values.
|
MultivariateGridInterpolation |
Interpolation on a rectilinear multi-dimensional grid.
|
MultivariateInnovationAlgorithm |
This class implements the part of the innovation algorithm that computes the prediction error
covariances, V and prediction coefficients Θ.
|
MultivariateIntTimeTimeSeries |
This is a multivariate time series indexed by integers.
|
MultivariateIntTimeTimeSeries.Entry |
This is the TimeSeries.Entry for an integer-indexed multivariate time series.
|
MultivariateLinearKalmanFilter |
The Kalman filter, also known as linear quadratic estimation (LQE),
is an algorithm which uses a series of measurements observed over time,
containing noise (random variations) and other inaccuracies,
and produces estimates of unknown variables that tend to be more precise than those that would be based on a single measurement alone.
|
MultivariateMinimizer<P extends OptimProblem,S extends MinimizationSolution<Vector>> |
This is a minimizer that minimizes a multivariate function or a
Vector function.
|
MultivariateNormalDistribution |
The multivariate Normal distribution or multivariate Gaussian distribution, is a generalization
of the one-dimensional (univariate) Normal distribution to higher dimensions.
|
MultivariateObservationEquation |
This is the observation equation in a controlled dynamic linear model.
|
MultivariateProbabilityDistribution |
A multivariate or joint probability distribution for X, Y, ... is a probability
distribution that gives the probability that each of X, Y, ... falls in any particular
range or discrete set of values specified for that variable.
|
MultivariateRandomProcess |
This interface represents a multivariate random process, a.k.a. a multivariate stochastic process.
|
MultivariateRandomRealizationGenerator |
This interface defines a generator to construct random realizations from a multivariate stochastic process.
|
MultivariateRandomRealizationOfRandomProcess |
This class generates random realizations from a multivariate random/stochastic process.
|
MultivariateRandomWalk |
This is the Random Walk construction of a multivariate stochastic process per SDE specification.
|
MultivariateRealization |
A multivariate realization is a multivariate time series indexed by real numbers, e.g., real time.
|
MultivariateRealization.Entry |
This is the TimeSeries.Entry for a real-number-indexed multivariate time series.
|
MultivariateRegularGrid |
A regular grid is a tessellation of n-dimensional Euclidean space by congruent parallelotopes
(e.g., bricks).
|
MultivariateRegularGrid.EquallySpacedVariable |
Specify the positioning and spacing along one dimension.
|
MultivariateResampler |
This is the interface of a multivariate re-sampler method.
|
MultivariateSDE |
This class represents a multi-dimensional, continuous-time Stochastic Differential Equation (SDE) of this form:
\[
dX_t = \mu(t,X_t,Z_t,...)*dt + \sigma(t, X_t, Z_t, ...)*dB_t
\]
|
MultivariateSimpleTimeSeries |
This simple multivariate time series has its vectored values indexed by integers.
|
MultivariateStateEquation |
This is the state equation in a controlled dynamic linear model.
|
MultivariateTDistribution |
The multivariate T distribution or multivariate Student distribution, is a generalization
of the one-dimensional (univariate) Student's t-distribution to higher dimensions.
|
MultivariateTimeSeries<T extends Comparable<? super T>,E extends MultivariateTimeSeries.Entry<T>> |
A multivariate time series is a sequence of vectors indexed by some notion of
time.
|
MultivariateTimeSeries.Entry<T> |
This is the TimeSeries.Entry for a multivariate time series.
|
Mutex |
Provides mutually exclusive execution of a Runnable.
|
MVOptimizer |
Solves for the optimal weight using Mean-Variance optimization.
|
MVOptimizerLongOnly |
A long-only MV optimizer.
|
MVOptimizerMinWeights |
Solves for weights with lower bounds.
|
MVOptimizerNoConstraint |
Solves for optimal weights by closed-form expressions of w(η) when
there is no limit on short selling.
|
MVOptimizerShrankMean |
Shrinks the mean towards average before passing the inputs to another
MVOptimizer.
|
MWC8222 |
Marsaglia's MWC256 (also known as MWC8222) is a multiply-with-carry generator.
|
NaiveRule |
This pivoting rule chooses the column with the most negative reduced cost.
|
NelderMeadMinimizer |
The Nelder-Mead method is a nonlinear optimization technique, which is well-defined for twice
differentiable and unimodal problems.
|
NevilleTable |
Neville's algorithm is a polynomial interpolation algorithm.
|
NewtonCotes |
The Newton-Cotes formulae, also called the Newton-Cotes quadrature rules or simply Newton-Cotes rules,
are a group of formulae for numerical integration (also called quadrature) based on evaluating the integrand at equally-spaced points.
|
NewtonCotes.Type |
There are two types of the Newton-Cotes method: OPEN and CLOSED.
|
NewtonPolynomial |
Newton polynomial is the interpolation polynomial for a given set of data points in the Newton
form.
|
NewtonRaphsonMinimizer |
The Newton-Raphson method is a second order steepest descent method that is
based on the quadratic approximation of the Taylor series.
|
NewtonRoot |
The Newton-Raphson method is as follows: one starts with an initial guess
which is reasonably close to the true root, then the function is approximated
by its tangent line (which can be computed using the tools of calculus), and
one computes the x-intercept of this tangent line (which is easily done with
elementary algebra).
|
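The resulting iteration is x_{n+1} = x_n - f(x_n)/f'(x_n). A minimal sketch (not the library's NewtonRoot API):

import java.util.function.DoubleUnaryOperator;

public class NewtonSketch {
    static double solve(DoubleUnaryOperator f, DoubleUnaryOperator df,
                        double x0, int maxIter, double tol) {
        double x = x0;
        for (int i = 0; i < maxIter; i++) {
            double fx = f.applyAsDouble(x);
            if (Math.abs(fx) < tol) break;
            x -= fx / df.applyAsDouble(x); // step along the tangent line
        }
        return x;
    }

    public static void main(String[] args) {
        // Square root of 2: f(x) = x^2 - 2.
        System.out.println(solve(x -> x * x - 2, x -> 2 * x, 1.0, 50, 1e-12)); // ~1.414214
    }
}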
NewtonSystemRoot |
This class solves the root for a non-linear system of equations.
|
NMSAAM |
|
NoChangeOfVariable |
This is a dummy substitution rule that does not change any variable.
|
NoConstraints |
|
NonlinearFit |
Fit log-ACER function by sequential quadratic programming (SQP) minimization (of weighted RSS),
using LinearFit's solution as the initial guess.
|
NonlinearFit.Result |
|
NonlinearShrinkageEstimator |
The nonlinear shrinkage method for given population eigenvalues.
|
NonNegativityConstraintOptimProblem |
This is a constrained optimization problem for a function which has all non-negative variables.
|
NonNegativityConstraints |
These constraints ensure that all variables are non-negative.
|
NoPairFoundException |
|
NormalDistribution |
The Normal distribution has its density a Gaussian function.
|
NormalMixtureDistribution |
The HMM states use the Normal distribution to model the observations.
|
NormalMixtureDistribution.Lambda |
the Normal distribution parameters
|
NormalOfExpFamily1 |
Normal distribution, univariate, unknown mean, known variance.
|
NormalOfExpFamily2 |
Normal distribution, univariate, unknown mean, unknown variance.
|
NormalRNG |
This is a random number generator that generates random deviates according to the Normal
distribution.
|
NormalRVG |
A multivariate Normal random vector is said to be p-variate normally
distributed if every linear combination of its p components has a univariate
normal distribution.
|
NoRootFoundException |
This is the Exception thrown when it fails to find a root.
|
NoShortSelling |
Weights cannot be negative.
|
NPEBPortfolioMomentsEstimator |
Uses Non-Parametric Empirical Bayes (NPEB) approach to estimate the first and
the second moments of the weighted portfolios.
|
NullMonitor<S> |
|
NumberUtils |
These are the utility functions to manipulate Numbers.
|
NumberUtils.Comparable<T extends Number> |
We need a precision parameter to determine whether two numbers are close enough to be treated
as equal.
|
ObjectResampler<X> |
This is the interface of a re-sampler method for objects.
|
ObservationEquation |
This is the observation equation in a controlled dynamic linear model.
|
ODE |
An ordinary differential equation (ODE) is an equation in which there is only one independent
variable and one or more derivatives of a dependent variable with respect to the independent
variable, so that all the derivatives occurring in the equation are ordinary derivatives.
|
ODE1stOrder |
A first order ordinary differential equation (ODE) initial value problem (IVP) takes the
following form.
|
ODE1stOrderWith2ndDerivative |
Some ODE solvers require the second derivative for more accurate Taylor series approximation.
|
ODEIntegrator |
This defines the interface for the numerical integration of a
first order ODE, for a sequence of pre-defined steps.
|
ODESolution |
Solution to an ODE problem.
|
ODESolver |
|
OLSBeta |
Beta coefficient estimator, β^, of an Ordinary Least Square linear regression model.
|
OLSRegression |
(Weighted) Ordinary Least Squares (OLS) is a method for fitting a linear regression model.
|
OLSResiduals |
This is the residual analysis of the results of an ordinary linear regression model.
|
OLSSolver |
This class solves an over-determined system of linear equations in the
ordinary least square sense.
|
OLSSolverByQR |
This class solves an over-determined system of linear equations in the
ordinary least square sense.
|
OLSSolverBySVD |
This class solves an over-determined system of linear equations in the
ordinary least square sense.
|
OneDimensionTimeSeries<T extends Comparable<? super T>> |
This class constructs a univariate realization from a multivariate realization by taking one of its dimension (coordinate).
|
OneWayANOVA |
The One-Way ANOVA test tests for the equality of the means of several groups.
|
OnlineInterpolator |
An online interpolator allows dynamically adding more points for interpolation.
|
Optimizer<P,S> |
Optimization, or mathematical programming, refers to choosing the best
element from some set of available alternatives.
|
OptimProblem |
This is an optimization problem that minimizes a real valued objective
function, one or multi dimension.
|
OrderedPairs |
Cartesian products and binary relations (and hence the ubiquitous functions) are defined in terms
of ordered pairs.
|
OrderStatisticsDistribution |
The asymptotic nondegenerate distributions of the r-th smallest (largest) order statistic.
|
OrnsteinUhlenbeckProcess |
This class represents a univariate Ornstein-Uhlenbeck (OU) process.
|
OrStopConditions |
Combines an arbitrary number of stop conditions, terminating when the first condition is met.
|
OrthogonalPolynomialFamily |
This factory class produces a family of orthogonal polynomials.
|
OUFitting |
This interface defines an estimation procedure to fit a univariate Ornstein-Uhlenbeck process.
|
OUFittingMLE |
This class fits a univariate Ornstein-Uhlenbeck process by using MLE.
|
OUFittingOLS |
This class fits a univariate Ornstein-Uhlenbeck process by using least squares regression.
|
OUProcess |
|
OUSim |
This class simulates a discrete path of a univariate Ornstein-Uhlenbeck (OU) process.
|
OuterProduct |
The outer product of two vectors a and b is a row vector multiplied on the left by
a column vector, i.e., the matrix ab'.
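For illustration only, a minimal standalone Java sketch (not this class's API) of the outer product: the (i, j) entry of ab' is a[i]*b[j].

public final class OuterProductSketch {
    static double[][] outer(double[] a, double[] b) {
        double[][] result = new double[a.length][b.length];
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < b.length; j++) {
                result[i][j] = a[i] * b[j]; // (i, j) entry of ab'
            }
        }
        return result;
    }

    public static void main(String[] args) {
        double[][] m = outer(new double[]{1, 2, 3}, new double[]{4, 5});
        System.out.println(java.util.Arrays.deepToString(m)); // [[4.0, 5.0], [8.0, 10.0], [12.0, 15.0]]
    }
}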
|
Package |
|
Pair |
An ordered pair (x,y) is a pair of mathematical objects.
|
PairComparatorByAbscissaFirst |
|
PairComparatorByAbscissaOnly |
|
PairingCheck |
|
PairingModel |
Given a set of symbols, their prices and other information, we find mean
reverting pairs for trading.
|
PairingModel1 |
|
PairingModel2 |
|
PairingModel3 |
|
PairingModel4 |
|
PairingModel5 |
|
PairingModelUtils |
|
PanelData<S,T extends Comparable<? super T>> |
Panel data refers to multi-dimensional data frequently involving
measurements over time.
|
PanelData.Transformation |
Transforms the data, e.g., taking log.
|
PanelRegression |
Panel (data) analysis is a statistical method, widely used in social science,
epidemiology, and econometrics, which deals with two-dimensional (cross
sectional/times series) panel data.
|
ParallelDoubleArrayOperation |
This is a multi-threaded implementation of the array math operations.
|
ParallelExecutor |
This class provides a framework for executing an algorithm in parallel.
|
PartialDerivativesByCenteredDifferencing |
This implementation computes the partial derivatives by centered differencing.
|
PartialFunction |
A partial function from X to Y is a function f: X' → Y, where X'
is a subset of X.
|
PattonPolitisWhite2009 |
This class implements the stationary and circular block bootstrapping method
with optimized block length.
|
PattonPolitisWhite2009ForObject<X> |
This class implements the stationary and circular block bootstrapping method
with optimized block length.
|
PattonPolitisWhite2009ForObject.AutoCorrelationForObject |
|
PattonPolitisWhite2009ForObject.AutoCovarianceForObject |
|
PattonPolitisWhite2009ForObject.Type |
|
PCA |
Principal Component Analysis (PCA) is a mathematical procedure that uses an
orthogonal transformation to convert a set of observations of possibly
correlated variables into a set of values of uncorrelated variables called
principal components.
|
PCAbyEigen |
This class performs Principal Component Analysis (PCA) on a data matrix,
using eigen decomposition on the correlation or covariance matrix.
|
PCAbySVD |
This class performs Principal Component Analysis (PCA) on a data matrix,
using the preferred Singular Value Decomposition (SVD) method.
|
PDE |
A partial differential equation (PDE) is a differential equation that contains unknown
multivariable functions and their partial derivatives.
|
PDESolutionGrid2D |
A solution to a bivariate PDE, which is applicable to methods which produce the solution as a
two-dimensional grid.
|
PDESolutionTimeSpaceGrid1D |
A solution to a one-dimensional PDE, which is applicable to methods which
produce the solution as a grid of time and space.
|
PDESolutionTimeSpaceGrid2D |
A solution to a two-dimensional PDE, which is applicable to methods which produce the solution
as a three-dimensional grid of time and space.
|
PDESolver |
A PDE solver solves a set of PDEs.
|
PDETimeSpaceGrid1D |
This grid numerically solves a 1D PDE, e.g., using the Crank-Nicolson scheme.
|
PeaksOverThreshold |
Peaks Over Threshold (POT) method estimates the parameters for generalized Pareto distribution
(GPD) using maximum likelihood on the observations that are over a given threshold.
|
PeaksOverThresholdOnClusters |
Similar to POT, but only uses the peak observations in clusters for the
parametric estimation.
|
PearsonMinimizer |
This is the Pearson method.
|
PenaltyFunction |
A function \(P: R^n \rightarrow R\) is a penalty function for a constrained optimization problem if it has these properties.
|
PenaltyMethodMinimizer |
The penalty method is an algorithm for solving a constrained minimization problem with general
constraints.
|
PenaltyMethodMinimizer.PenaltyFunctionFactory |
For each constrained optimization problem, the solver creates a new penalty function for it.
|
PermutationMatrix |
A permutation matrix is a square matrix that has exactly one entry '1' in each row and each
column and 0's elsewhere.
|
PerturbationAroundPoint |
The initial population is generated by adding a variance around a given initial.
|
PhysicalConstants |
A collection of fundamental physical constants.
|
Point |
Represents an n-dimensional point.
|
PoissonDistribution |
The Poisson distribution (or Poisson law of small numbers) is a discrete probability distribution
that expresses the probability of a given number of events occurring in a fixed interval of time
and/or space if these events occur with a known average rate and independently of the time since
the last event.
|
PoissonEquation2D |
Poisson's equation is an elliptic PDE that takes the following general form.
|
PoissonMixtureDistribution |
The HMM states use the Poisson distribution to model the observations.
|
PolygonalChain |
A polygonal chain, polygonal curve, polygonal path, or piecewise linear curve, is a connected
series of line segments.
|
PolygonalChainByArray |
|
Polynomial |
A polynomial is a UnivariateRealFunction that represents a finite length expression constructed from variables and constants,
using the operations of addition, subtraction, multiplication, and constant non-negative whole number exponents.
|
PolyRoot |
This is a solver for finding the roots of a polynomial equation.
|
PolyRootSolver |
A root (or a zero) of a polynomial p is a member x in the domain of p such that p(x) vanishes.
|
PortfolioOptimizationAlgorithm |
Computes the optimal weights based only on returns.
|
PortfolioOptimizationAlgorithm.CovarianceEstimator |
Define how the expected covariances of an asset for a future period are
computed.
|
PortfolioOptimizationAlgorithm.MeanEstimator |
Define how the expected mean of an asset for a future period is
computed.
|
PortfolioOptimizationAlgorithm.SampleCovarianceEstimator |
Estimate the expected covariances of an asset using sample covariances.
|
PortfolioOptimizationAlgorithm.SampleMeanEstimator |
Estimate the expected mean of an asset using sample mean.
|
PortfolioOptimizationAlgorithm.SymbolLookup |
Provides a lookup for product symbols and indices.
|
PortfolioRiskExactSigma |
Constructs the constraint coefficient arrays of the portfolio risk term in
the compact form.
|
PortfolioRiskExactSigma.DefaultRoot |
|
PortfolioRiskExactSigma.Diagonalization |
|
PortfolioRiskExactSigma.MatrixRoot |
Specifies the method to compute the root of a matrix.
|
PortfolioUtils |
|
PositiveDefiniteMatrixByPositiveDiagonal |
This class "converts" a matrix into a symmetric, positive definite matrix, if it is not already
so, by forcing the diagonal entries in the eigen decomposition to a small non-negative number,
e.g., 0.
|
PositiveSemiDefiniteMatrixNonNegativeDiagonal |
This class "converts" a matrix into a symmetric, positive semi-definite matrix, if it is not
already so, by forcing the negative diagonal entries in the eigen decomposition to 0.
|
Pow |
This is a square matrix A to the power of an integer n, \(A^n\).
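For illustration only, a minimal standalone Java sketch (not this class's API) that raises a square matrix to a non-negative integer power by repeated squaring; the O(log n) exponentiation scheme is an assumption for the example, not necessarily what this class does.

public final class MatrixPowSketch {
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int k = 0; k < n; k++)
                for (int j = 0; j < n; j++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    static double[][] pow(double[][] a, int n) {
        int size = a.length;
        double[][] result = new double[size][size];
        for (int i = 0; i < size; i++) result[i][i] = 1.0; // start from the identity
        double[][] base = a;
        while (n > 0) {
            if ((n & 1) == 1) result = multiply(result, base);
            base = multiply(base, base);
            n >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 1}, {0, 1}};
        System.out.println(java.util.Arrays.deepToString(pow(a, 5))); // [[1.0, 5.0], [0.0, 1.0]]
    }
}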
|
PowellMinimizer |
Powell's algorithm, starting from an initial point, performs a series
of line searches in one iteration.
|
PowerLawSingularity |
This transformation is good for an integral which diverges at one of the end points.
|
PowerLawSingularity.PowerLawSingularityType |
the type of end point divergence
|
PrecisionUtils |
Precision-related utility functions.
|
Preconditioner |
Preconditioning reduces the condition number of the
coefficient matrix of a linear system to accelerate the convergence
when the system is solved by an iterative method.
|
PreconditionerFactory |
This constructs a new instance of Preconditioner for a coefficient matrix.
|
PrimalDualInteriorPointMinimizer |
Solves a Dual Second Order Conic Programming problem using the Primal Dual
Interior Point algorithm.
|
PrimalDualInteriorPointMinimizer1 |
The SOCP dual problem we are solving here is:
\[
\max \ {\bm b}^T \hat{\bm y} \\
{\rm s.t.}\ ({\bm A_i^q})^T \hat{\bm y} + {\bm z_i^q} = c_i^q,\ {\bm z_i^q}\in \mathcal{K}_q^{q_i},\ \text{for } i\in [n_q];\\
({\bm A^{\ell}})^T \hat{\bm y} + {\bm z}^{\ell} = c^{\ell},\ {\bm z}^{\ell} \ge 0;\\
({\bm A^u})^T \hat{\bm y} = c^u;\\
\hat{\bm y} \in \mathbb{R}^m;\ {\bm z}^{\ell}\in \mathbb{R}^{n_{\ell}};\ {\bm z}^u \in \mathbb{R}^{n_u}.
\]
|
PrimalDualPathFollowingMinimizer |
The Primal-Dual Path-Following algorithm is an interior point method that solves Semi-Definite
Programming problems.
|
PrimalDualSolution |
The vector set {x, s, y} is a solution to both the primal and dual SOCP problems.
|
ProbabilityDistribution |
A univariate probability distribution completely characterizes a random variable by stipulating
the probability of each value of a random variable (when the variable is discrete), or the
probability of the value falling within a particular interval (when the variable is continuous).
|
ProbabilityMassFunction<X> |
A probability mass function (pmf) is a function that gives the probability that a discrete random
variable is exactly equal to some value.
|
ProbabilityMassFunction.Mass<X> |
Stores a possible outcome for a probability distribution and its associated probability.
|
ProbabilityMassQuantile<X> |
As a probability mass function is discrete, there are gaps between values in the domain of its cdf.
The quantile function is:
\[
Q(p)\,=\,\inf\left\{ x\in R : p \le F(x) \right\}
\]
|
ProbabilityMassSampler<X> |
A random sampler that is constructed ad-hoc from a list of values and their probabilities.
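For illustration only, a minimal standalone Java sketch (not this class's API) of inverse-transform sampling from a discrete distribution given a list of values and their probabilities; the values and probabilities below are made up for the example and assumed to sum to 1.

import java.util.Random;

public final class DiscreteSamplerSketch {
    public static void main(String[] args) {
        String[] values = {"A", "B", "C"};
        double[] probabilities = {0.2, 0.5, 0.3};
        Random rng = new Random(1234);

        double u = rng.nextDouble();      // uniform draw on [0, 1)
        double cumulative = 0.0;
        String sample = values[values.length - 1];
        for (int i = 0; i < values.length; i++) {
            cumulative += probabilities[i];
            if (u < cumulative) {         // first value whose cumulative probability exceeds u
                sample = values[i];
                break;
            }
        }
        System.out.println(sample);
    }
}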
|
ProductOfWeights |
Defines portfolio diversification as
\[
D(w) = \prod_i w_i
\]
|
Projection |
Projects a vector v onto another vector w, or onto a set of basis vectors \(\{w_i\}\).
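For illustration only, a minimal standalone Java sketch (not this class's API) of the orthogonal projection of v onto a single vector w, using the standard formula (v.w / w.w) w; it assumes w is not the zero vector.

public final class ProjectionSketch {
    static double[] project(double[] v, double[] w) {
        double vw = 0.0, ww = 0.0;
        for (int i = 0; i < v.length; i++) {
            vw += v[i] * w[i];
            ww += w[i] * w[i];
        }
        double scale = vw / ww; // assumes w is not the zero vector
        double[] p = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            p[i] = scale * w[i];
        }
        return p;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(
                project(new double[]{3, 4}, new double[]{1, 0}))); // [3.0, 0.0]
    }
}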
|
ProposalFunction |
A proposal function goes from the current state to the next state, where a state is a vector.
|
PseudoInverse |
The Moore-Penrose pseudo-inverse of an m x n matrix A is \(A^+\).
|
PureILPProblem |
This is a pure integer linear programming problem, in which all variables are integral.
|
QPbySOCPMinimizer |
We first convert a QP problem to an equivalent SOCP problem and then solve it
using an SOCP solver.
|
QPbySOCPMinimizer1 |
|
QPConstraint |
This interface allows adding constraints to a Quadratic Programming problem
that solves for w_eff, the efficient frontier or the optimal allocation of
assets.
|
QPDualActiveSetMinimizer |
This implementation solves a Quadratic Programming problem using the dual
active set algorithm.
|
QPException |
This is the exception thrown when there is an error solving a quadratic programming problem.
|
QPInfeasible |
This is the exception thrown by a quadratic programming solver when the quadratic programming problem is infeasible, i.e., no solution.
|
QPMinimizer |
A typedef for QP minimizer.
|
QPMinWeights |
|
QPNoConstraint |
Deprecated.
|
QPNoShortSelling |
|
QPPrimalActiveSetMinimizer |
This implementation solves a Quadratic Programming problem using the Primal
Active Set algorithm.
|
QPProblem |
Quadratic Programming is the problem of optimizing (minimizing) a quadratic function of several variables subject to linear constraints on these variables.
|
QPProblemOnlyEqualityConstraints |
A quadratic programming problem with only equality constraints can be converted into
an equivalent quadratic programming problem without constraints, hence a mere quadratic function.
|
QPSimpleMinimizer |
These are the utility functions to solve simple quadratic programming problems that admit
analytical solutions.
|
QPSolution |
This is a solution to a quadratic programming problem.
|
QPtoSOCPTransformer |
|
QPtoSOCPTransformer1 |
|
QPUnity |
|
QPWeightsLimit |
|
QR |
QR decomposition of a matrix decomposes an m x n matrix A so that A = Q * R.
|
QRAlgorithm |
The QR algorithm is an eigenvalue algorithm by computing the real Schur canonical form of a
matrix.
|
QRDecomposition |
QR decomposition of a matrix decomposes an m x n matrix A so that A = Q * R.
|
QuadraticFunction |
A quadratic function takes this form: \(f(x) = \frac{1}{2} \times x'Hx + x'p + c\).
|
QuadraticMonomial |
A quadratic monomial has this form: \(x^2 + ux + v\).
|
QuadraticRoot |
This is a solver for finding the roots of a quadratic equation, \(ax^2 + bx + c = 0\).
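For illustration only, a minimal standalone Java sketch (not this class's API) of the quadratic formula; it assumes a != 0 and a non-negative discriminant, i.e., real roots only.

public final class QuadraticRootSketch {
    public static void main(String[] args) {
        double a = 1, b = -3, c = 2;          // x^2 - 3x + 2 = (x - 1)(x - 2)
        double discriminant = b * b - 4 * a * c;
        double sqrtD = Math.sqrt(discriminant);
        double r1 = (-b + sqrtD) / (2 * a);
        double r2 = (-b - sqrtD) / (2 * a);
        System.out.println(r1 + ", " + r2);   // 2.0, 1.0
    }
}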
|
QuadraticSyntheticDivision |
Divide a polynomial P(x) by a quadratic monomial \((x^2 + ux + v)\)
to give the quotient Q(x) and the remainder (b * (x + u) + a).
|
Quantile |
Quantiles are points taken at regular intervals from the cumulative
distribution function (CDF) of a random variable.
|
Quantile.QuantileType |
the available quantile definitions
|
QuarticRoot |
This is a quartic equation solver that solves \(ax^4 + bx^3 + cx^2 + dx + e = 0\).
|
QuarticRoot.QuarticSolver |
This defines a quartic equation solver.
|
QuarticRootFerrari |
This is a quartic equation solver that solves \(ax^4 + bx^3 + cx^2 + dx + e = 0\) using the Ferrari method.
|
QuarticRootFormula |
This is a quartic equation solver that solves \(ax^4 + bx^3 + cx^2 + dx + e = 0\) using a root-finding formula.
|
QuasiBinomial |
This is the quasi Binomial distribution in GLM.
|
QuasiDistribution |
This interface represents the quasi-distribution used in GLM.
|
QuasiFamily |
This interface represents the quasi-family used in GLM.
|
QuasiGamma |
This is the quasi Gamma distribution in GLM.
|
QuasiGaussian |
This is the quasi Gaussian distribution in GLM.
|
QuasiGLMBeta |
This is the estimate of beta, \(\hat{\beta}\), in a quasi Generalized Linear Model,
i.e., a GLM with a quasi-family of distributions.
|
QuasiGLMNewtonRaphson |
The Newton-Raphson method is an iterative algorithm to estimate the β of the quasi
GLM regression.
|
QuasiGLMProblem |
This class represents a quasi generalized linear regression problem.
|
QuasiGLMResiduals |
Residual analysis of the results of a quasi Generalized Linear Model
regression.
|
QuasiInverseGaussian |
This is the quasi Inverse-Gaussian distribution in GLM.
|
QuasiMinimalResidualSolver |
The Quasi-Minimal Residual method (QMR) is useful for solving a non-symmetric n-by-n linear
system.
|
QuasiNewtonMinimizer |
The Quasi-Newton methods in optimization are for finding local maxima and minima of functions.
|
QuasiPoisson |
This is the quasi Poisson distribution in GLM.
|
QuEST |
QuEST is a function that generates sample eigenvalues from population
eigenvalues.
|
QuEST.Result |
|
R1Projection |
|
R1toConstantMatrix |
A constant matrix function maps a real number to a constant matrix: \(R \rightarrow A\).
|
R1toMatrix |
This is a function that maps from R1 to a Matrix space.
|
R2toMatrix |
This is a function that maps from R2 to a Matrix space.
|
RamerDouglasPeucker |
The Ramer-Douglas-Peucker algorithm simplifies a PolygonalChain by removing vertices
which do not affect the shape of the curve to a given tolerance.
|
Rand1Bin |
The Rand-1-Bin rule is defined by:
mutation by adding a scaled, randomly sampled vector difference to a third vector
(differential mutation);
crossover by performing a uniform crossover (discrete recombination).
|
RandomBetaGenerator |
This is a random number generator that generates random deviates according to the Beta distribution.
|
RandomExpGenerator |
This is a random number generator that generates random deviates according to the exponential distribution.
|
RandomGammaGenerator |
This is a random number generator that generates random deviates according to the Gamma distribution.
|
RandomLongGenerator |
A (pseudo) random number generator that generates a sequence of longs that lack any pattern and are uniformly distributed.
|
RandomNumberGenerator |
A (pseudo) random number generator is an algorithm designed to generate a sequence of numbers that lack any pattern.
|
RandomProcess |
This interface represents a univariate random process, a.k.a. a stochastic process.
|
RandomRealizationGenerator |
This interface defines a generator to construct random realizations from a univariate stochastic process.
|
RandomRealizationOfRandomProcess |
This class generates random realizations from a random/stochastic process.
|
RandomStandardNormalGenerator |
This is a random number generator that generates random deviates according to the standard Normal
distribution.
|
RandomVectorGenerator |
A (pseudo) multivariate random number generator samples a random vector from
a multivariate distribution.
|
RandomWalk |
This is the Random Walk construction of a stochastic process per SDE
specification.
|
Rank |
Rank is a relationship between a set of items such that, for any two items,
the first is either "ranked higher than", "ranked lower than" or "ranked
equal to" the second.
|
Rank.TiesMethod |
The method for assigning ranks when some values are equal (called
'ties').
|
RankOneMinimizer |
The Rank One method is a quasi-Newton method to solve unconstrained nonlinear
optimization problems.
|
Rastrigin |
The Rastrigin function is a non-convex function used as a performance test problem for
optimization algorithms.
|
RayleighDistribution |
The L2 norm of \((x_1, x_2)\), where the \(x_i\) are normal, uncorrelated and of equal variance,
has the Rayleigh distribution.
|
RayleighRNG |
This random number generator samples from the Rayleigh distribution using the
inverse transform sampling method.
|
Real |
A real number is an arbitrary precision number.
|
RealInterval |
This is an interval on the real line.
|
Realization |
This is a univariate time series indexed by real numbers.
|
Realization.Entry |
This is the TimeSeries.Entry for a real-number-indexed univariate time series.
|
RealMatrix |
|
RealScalarFunction |
A real-valued function is a \(R^n \rightarrow R\) function, \(y = f(x_1, ..., x_n)\).
|
RealScalarFunctionChromosome |
This chromosome encodes a real valued function.
|
RealScalarSubFunction |
|
RealVectorFunction |
A vector-valued function is a \(R^n \rightarrow R^m\) function, \([y_1,...,y_m] = f(x_1,...,x_n)\).
|
RealVectorSpace |
A vector space is a set of vectors that are closed under some operations.
|
RealVectorSubFunction |
|
RecursiveGridInterpolation |
This algorithm works by recursively calling lower order interpolation (hence
the cost is exponential), until the given univariate algorithm can be used
when the remaining dimension becomes one.
|
Reference<T> |
|
RelativeTolerance |
The stopping criterion is that the norm of the residual r relative to
the input base is equal to or smaller than the specified
tolerance, that is,
||r||_2 / base ≤ tolerance
|
RepeatedCoordinatesException |
|
Resampler |
This is the interface of a re-sampler method.
|
ResamplerModel |
|
ReturnLevel |
Given a GEV distribution of a random variable \(X\), the return level \(\eta\) is the value that
is expected to be exceeded on average once every interval of time \(T\), with a probability of
\(1 / T\).
|
ReturnPeriod |
The return period \(R\) of a level \(\eta\) for a random variable \(X\) is the mean number of
trials that must be done for \(X\) to exceed \(\eta\).
|
Returns |
Contains utility methods related to returns computation.
|
ReturnsCalculator |
This interface defines how return is computed from two values of a portfolio.
|
ReturnsCalculators |
Various ways of return calculations.
|
ReturnsMatrix |
|
ReturnsMoments |
Contains the estimated moments of asset returns.
|
ReturnsMoments.Estimator |
The interface to estimate moments from returns.
|
ReturnsResamplerFactory |
This is a factory interface to construct new instances of multivariate
resamplers.
|
ReversedWeibullDistribution |
The Reversed Weibull distribution is a special case (Type III) of the generalized extreme value
distribution, with \(\xi<0\).
|
Ridders |
Ridders' method computes the numerical derivative of a function.
|
Riemann |
This is a wrapper class that integrates a function by using an appropriate integrator together with Romberg's method.
|
Ring<R> |
A ring is a set R equipped with two binary operations called addition and multiplication:
+ : R × R → R and
⋅ : R × R → R
To qualify as a ring, the set and two operations, (R, +, ⋅), must satisfy the requirements known as the ring axioms.
|
RNGUtils |
Provides static methods that wrap random number generators to produce synchronized generators.
|
RntoMatrix |
This interface is a function that maps from Rn to a Matrix space.
|
RobustAdaptiveMetropolis |
A variation of the Metropolis algorithm that uses the estimated covariance of the target
distribution in the proposal distribution, based on a paper by Vihola (2011).
|
RobustCointegration |
This class runs the robust cointegration algorithm on a pair of prices to
determine if their cointegration relationship is stable enough to trade.
|
Romberg |
Romberg's method computes an integral by generating a sequence of estimations of the integral value and then doing an extrapolation.
|
RootedTree<V,E extends Arc<V>> |
A rooted tree is a directed graph that has a designated root vertex, from which distances
to the other vertices are measured.
|
RungeKutta |
The Runge-Kutta methods are an important family of implicit and explicit iterative methods for
the approximation of solutions of ordinary differential equations.
|
RungeKutta1 |
This is the first-order Runge-Kutta formula, which is the same as the Euler method.
|
RungeKutta10 |
This is the tenth-order Runge-Kutta formula.
|
RungeKutta2 |
This is the second-order Runge-Kutta formula, which can be implemented efficiently with a
three-step algorithm.
|
RungeKutta3 |
This is the third-order Runge-Kutta formula.
|
RungeKutta4 |
This is the fourth-order Runge-Kutta formula.
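For illustration only, a minimal standalone Java sketch (not this class's API) of one classical fourth-order Runge-Kutta step for the scalar IVP y' = f(t, y); the toy problem y' = y with y(0) = 1 is an assumption for the example.

import java.util.function.DoubleBinaryOperator;

public final class RK4Sketch {
    static double step(DoubleBinaryOperator f, double t, double y, double h) {
        double k1 = f.applyAsDouble(t, y);
        double k2 = f.applyAsDouble(t + h / 2, y + h * k1 / 2);
        double k3 = f.applyAsDouble(t + h / 2, y + h * k2 / 2);
        double k4 = f.applyAsDouble(t + h, y + h * k3);
        return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6;
    }

    public static void main(String[] args) {
        // integrate y' = y, y(0) = 1 up to t = 1; the result approximates e
        double y = 1.0, h = 0.01;
        for (double t = 0.0; t < 1.0 - 1e-12; t += h) {
            y = step((tt, yy) -> yy, t, y, h);
        }
        System.out.println(y); // ~2.71828
    }
}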
|
RungeKutta5 |
This is the fifth-order Runge-Kutta formula.
|
RungeKutta6 |
This is the sixth-order Runge-Kutta formula.
|
RungeKutta7 |
This is the seventh-order Runge-Kutta formula.
|
RungeKutta8 |
This is the eighth-order Runge-Kutta formula.
|
RungeKuttaFehlberg |
The Runge-Kutta-Fehlberg method is a version of the classic Runge-Kutta method, which
additionally uses step-size control and hence allows specification of a local truncation error
bound.
|
RungeKuttaIntegrator |
This integrator works with a single-step stepper which estimates the solution for the next step
given the solution of the current step.
|
RungeKuttaStepper |
|
SampleAutoCorrelation |
This is the sample Auto-Correlation Function (ACF) for a univariate data set.
|
SampleAutoCovariance |
This is the sample Auto-Covariance Function (ACVF) for a univariate data set.
|
SampleAutoCovariance.Type |
the available auto-covariance types
|
SampleCovariance |
This class computes the Covariance matrix of a matrix, where the (i, j) entry is the
covariance of the i-th column and j-th column of the matrix.
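For illustration only, a minimal standalone Java sketch (not this class's API) of a sample covariance matrix where rows are observations and columns are variables; the unbiased 1/(n-1) normalization is an assumption for the example.

public final class SampleCovarianceSketch {
    static double[][] covariance(double[][] data) {
        int n = data.length, p = data[0].length;
        double[] mean = new double[p];
        for (double[] row : data)
            for (int j = 0; j < p; j++) mean[j] += row[j];
        for (int j = 0; j < p; j++) mean[j] /= n;          // column means
        double[][] cov = new double[p][p];
        for (double[] row : data)
            for (int i = 0; i < p; i++)
                for (int j = 0; j < p; j++)
                    cov[i][j] += (row[i] - mean[i]) * (row[j] - mean[j]) / (n - 1);
        return cov;
    }

    public static void main(String[] args) {
        double[][] data = {{1, 2}, {2, 4}, {3, 6}};
        System.out.println(java.util.Arrays.deepToString(covariance(data)));
        // [[1.0, 2.0], [2.0, 4.0]]
    }
}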
|
SamplePartialAutoCorrelation |
This is the sample partial Auto-Correlation Function (PACF) for a univariate data set.
|
ScaledPolynomial |
This constructs a scaled polynomial whose coefficients are neither too big nor too small,
hence avoiding overflow or underflow.
|
ScientificNotation |
Scientific notation expresses a number in the form
\(x = a \times 10^b\)
a is called the significand or mantissa, and 1 ≤ |a| < 10.
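For illustration only, a minimal standalone Java sketch (not this class's API) that decomposes a nonzero number into significand a and exponent b with x = a * 10^b.

public final class ScientificNotationSketch {
    public static void main(String[] args) {
        double x = 1234.5678;                              // assumed nonzero
        int b = (int) Math.floor(Math.log10(Math.abs(x))); // exponent
        double a = x / Math.pow(10, b);                    // significand, 1 <= |a| < 10
        System.out.println(a + " * 10^" + b);              // 1.2345678 * 10^3
    }
}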
|
SDE |
This class represents a univariate, continuous-time Stochastic Differential Equation (SDE) of
the following form.
|
SDPDualProblem |
A dual SDP problem, as in equation 14.4 in the reference, takes the following form.
|
SDPDualProblem.EqualityConstraints |
This is the collection of equality constraints:
\[
\sum_{i=1}^{p}y_i\mathbf{A_i}+\textbf{S} = \textbf{C}, \textbf{S} \succeq \textbf{0}
\]
|
SDPPrimalProblem |
A Primal SDP problem, as in equation 14.1 in the reference, takes the
following form.
|
SDPT3v4 |
This implements Algorithm_IPC, the SOCP interior point algorithm in SDPT3
version 4.
|
SDPT3v4_1a |
This implements Algorithm_IPC, the SOCP interior point algorithm in SDPT3
version 4.
|
SDPT3v4_1b |
This implements Algorithm_IPC, the SOCP interior point algorithm in SDPT3
version 4.
|
Seedable |
A seed-able experiment allows the same experiment to be repeated in exactly the same way.
|
SelectionByAIC |
In each step, a factor is added if the resulting model has the highest AIC, until no factor
addition can result in a model with AIC higher than the current AIC.
|
SelectionByZValue |
In each step, the most significant factor is added, until all remaining factors are
insignificant.
|
SemiImplicitExtrapolation |
|
Sequence |
A sequence is an ordered list of (real) numbers.
|
ShapiroWilk |
The Shapiro-Wilk test tests the null hypothesis that a sample comes from a normally distributed population.
|
ShapiroWilkDistribution |
The Shapiro-Wilk distribution is the distribution of the Shapiro-Wilk statistic,
which tests the null hypothesis that a sample comes from a normally distributed population.
|
ShortestPath<V> |
In graph theory, a shortest path algorithm finds a path between two vertices in a graph such that
the sum of the weights of its constituent edges is minimized.
|
SHR0 |
SHR0 is a simple uniform random number generator.
|
SHR3 |
SHR3 is a 3-shift-register generator with period 2^32-1.
|
SiegelTukey |
The Siegel-Tukey test tests for differences in scale (variability) between two groups.
|
SimilarMatrix |
Given a matrix A and an invertible matrix P, we construct the similar matrix
B s.t.
\(B = P^{-1}AP\)
|
SimpleAnnealingFunction |
This annealing function takes a random step in a uniform direction, where the step size depends
only on the temperature.
|
SimpleAR1Fit |
This class does a quick AR(1) fitting to the time series, essentially
treating the returns as independent.
|
SimpleAR1Moments |
|
SimpleArc<V> |
A simple arc has two vertices: head and tail.
|
SimpleCellFactory |
|
SimpleDoubleArrayOperation |
This is a simple, single-threaded implementation of the array math operations.
|
SimpleEdge<V> |
A simple edge has two vertices.
|
SimpleGARCHFit |
This class does a quick GARCH(1,1) fitting to the time series, essentially
treating the returns as independent.
|
SimpleGARCHMoments1 |
Estimates the moments by GARCH model.
|
SimpleGARCHMoments2 |
Estimates the moments by GARCH model.
|
SimpleGridMinimizer |
This minimizer is a simple global optimization method.
|
SimpleGridMinimizer.NewCellFactoryCtor |
This factory constructs a new SimpleCellFactory for each minimization problem.
|
SimpleMatrixMathOperation |
This is a generic, single-threaded implementation of matrix math operations.
|
SimpleMC |
This is a time-homogeneous Markov chain with a finite state space.
|
SimpleTemperatureFunction |
Abstract class for the common case where \(T^V_t = T^A_t\).
|
SimpleTimeSeries |
This simple univariate time series simply wraps a double[] to form a time series.
|
SimplexCuttingPlaneMinimizer |
The use of cutting planes to solve Mixed Integer Linear Programming (MILP) problems was introduced by Ralph E Gomory.
|
SimplexCuttingPlaneMinimizer.CutterFactory |
This factory constructs a new Cutter for each MILP problem.
|
SimplexCuttingPlaneMinimizer.CutterFactory.Cutter |
A Cutter defines how to cut a simplex table, i.e., how to relax a linear program so that
the current non-integer solution is no longer feasible to the relaxation.
|
SimplexPivoting |
A simplex pivoting finds a row and column to exchange to reduce the cost function.
|
SimplexPivoting.Pivot |
the pivot
|
SimplexTable |
This is a simplex table used to solve a linear programming problem using a simplex method.
|
SimplexTable.Label |
|
SimplexTable.LabelType |
|
Simpson |
Simpson's rule can be thought of as a special case of Romberg's method.
|
SimulatedAnnealingMinimizer |
Simulated Annealing is a global optimization meta-heuristic that is inspired by annealing in
metallurgy.
|
SingularValueByDQDS |
Computes all the singular values of a bidiagonal matrix.
|
Skewness |
Skewness is a measure of the asymmetry of the probability distribution.
|
SmallestSubscriptRule |
Bland's smallest-subscript rule is for anti-cycling in choosing a pivot.
|
SOCPConstraints |
Marker interface for SOCP constraints.
|
SOCPDualProblem |
This is the Dual Second Order Conic Programming problem.
|
SOCPDualProblem.EqualityConstraints |
|
SOCPDualProblem1 |
This is the Dual Second Order Conic Programming problem.
|
SOCPDualProblem1.EqualityConstraints |
|
SOCPGeneralConstraint |
This represents the SOCP general constraint of this form.
|
SOCPGeneralConstraints |
This represents a set of SOCP general constraints of this form.
|
SOCPGeneralProblem |
Many convex programming problems can be represented in the following form.
|
SOCPGeneralProblem1 |
Many convex programming problems can be represented in the following form.
|
SOCPLinearBlackList |
A black list means that the positions of some assets must be zero.
|
SOCPLinearEqualities |
|
SOCPLinearEquality |
Linear equality for SOCP problem.
|
SOCPLinearInequalities |
|
SOCPLinearInequality |
Linear inequality for SOCP problem.
|
SOCPLinearMaximumLoan |
A maximum loan constraint.
|
SOCPLinearSectorExposure |
A sector exposure constraint.
|
SOCPLinearSectorNeutrality |
A sector neutrality constraint means that the sums of weights for the given sectors are zero.
|
SOCPLinearSelfFinancing |
A self financing constraint.
|
SOCPLinearZeroValue |
A zero value constraint.
|
SOCPMaximumLoan |
Transforms a maximum loan constraint into the compact SOCP form.
|
SOCPNoTradingList1 |
Transforms a black list (not to trade a new position) constraint into the
compact SOCP form.
|
SOCPNoTradingList2 |
Transforms a black list (not to trade a new position) constraint into the
compact SOCP form.
|
SOCPPortfolioConstraint |
An SOCP constraint for portfolio optimization, e.g., market impact, is
represented by a set of constraints in this form:
\[
||A^{T}x+c||_{2}\leq b^{T}x+d
\]
or this form:
\[
A^T x = c, x \in \Re^m
\]
or this form:
\[
A^T x \leq c, x \in \Re^m
\]
|
SOCPPortfolioConstraint.ConstraintViolationException |
Exception thrown when a constraint is violated.
|
SOCPPortfolioConstraint.Variable |
the variables involved in SOCPGeneralConstraints
|
SOCPPortfolioObjectiveFunction |
Constructs the objective function for portfolio optimization.
|
SOCPPortfolioProblem |
Constructs an SOCP problem for portfolio optimization.
|
SOCPPortfolioProblem1 |
Constructs an SOCP problem for portfolio optimization.
|
SOCPRiskConstraint |
|
SOCPSectorExposure |
Transforms a sector exposure constraint into the compact SOCP form.
|
SOCPSectorNeutrality |
Transforms a sector neutral constraint into the compact SOCP form.
|
SOCPSelfFinancing |
Transforms a self financing constraint into the compact SOCP form.
|
SOCPZeroValue |
Transforms a zero value constraint into the compact SOCP form.
|
SORSweep |
This is a building block for
SOR and
SSOR
to perform the forward or backward sweep.
|
SortableArray |
These arrays can be sorted according to the dictionary order.
|
SortedOrderedPairs |
The ordered pairs are first sorted by abscissa, then by ordinate.
|
SparseDAGraph<V,E extends Arc<V>> |
This class implements the sparse directed acyclic graph representation.
|
SparseDiGraph<V,E extends Arc<V>> |
This class implements the sparse directed graph representation.
|
SparseGraph<V,E extends HyperEdge<V>> |
This class implements the sparse graph representation.
|
SparseMatrix |
A sparse matrix stores only non-zero values.
|
SparseMatrix.Entry |
This is a (non-zero) entry in a sparse matrix.
|
SparseMatrix.ValueArray |
|
SparseMatrixUtils |
|
SparseStructure |
This interface defines common operations on sparse structures such as sparse
vector or sparse matrix.
|
SparseTree<V> |
This class implements the sparse tree representation.
|
SparseUnDiGraph<V,E extends UndirectedEdge<V>> |
This class implements the sparse undirected graph representation.
|
SparseVector |
A sparse vector stores only non-zero values.
|
SparseVector.Entry |
|
SparseVector.Iterator |
This wrapper class overrides the Iterator.remove()
method to throw an exception when called.
|
SpearmanRankCorrelation |
Spearman's rank correlation coefficient or Spearman's rho is a non-parametric measure of
statistical dependence between two variables.
|
Spectrum |
A spectrum is the set of eigenvalues of a matrix.
|
SQPActiveSetMinimizer |
Sequential quadratic programming (SQP) is an iterative method for nonlinear
optimization.
|
SQPActiveSetMinimizer.VariationFactory |
This factory constructs a new instance of SQPASVariation for each
SQP problem.
|
SQPActiveSetOnlyEqualityConstraint1Minimizer |
This implementation is a modified version of Algorithm 15.1 in the reference to solve a general constrained optimization problem with only equality constraints.
|
SQPActiveSetOnlyEqualityConstraint1Minimizer.VariationFactory |
This factory constructs a new instance of SQPASEVariation for each SQP problem.
|
SQPActiveSetOnlyEqualityConstraint2Minimizer |
|
SQPActiveSetOnlyInequalityConstraintMinimizer |
This implementation is a modified version of Algorithm 15.2 in the reference
to solve a general constrained optimization problem with only inequality
constraints.
|
SQPASEVariation |
This interface allows customization of certain operations in the Active Set algorithm to solve a general constrained minimization problem with only equality constraints
using Sequential Quadratic Programming.
|
SQPASEVariation1 |
This implementation is a modified version of the algorithm in the reference to solve a general constrained minimization problem
using Sequential Quadratic Programming.
|
SQPASEVariation2 |
This implementation tries to find an exact positive definite Hessian whenever possible.
|
SQPASVariation |
This interface allows customization of certain operations in the Active Set algorithm to solve a general constrained minimization problem
using Sequential Quadratic Programming.
|
SQPASVariation1 |
This implementation is a modified version of Algorithm 15.4 in the reference
to solve a general constrained minimization problem using Sequential
Quadratic Programming.
|
SSORPreconditioner |
SSOR preconditioner is derived from a symmetric coefficient matrix A
which is decomposed as
\(A = D + L + L^T\)
The SSOR preconditioning matrix is defined as
\(M = (D + L) D^{-1} (D + L)^T\)
or, parameterized by ω,
\(M(\omega) = \frac{1}{2 - \omega} \left(\frac{D}{\omega} + L\right) \left(\frac{D}{\omega}\right)^{-1} \left(\frac{D}{\omega} + L\right)^T\)
|
StandardCumulativeNormal |
The cumulative Normal distribution function describes the probability of a Normal random variable falling in the interval \((-\infty, x]\).
|
StandardInterval |
This transformation is for mapping the integral region from [a, b] to [-1, 1].
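For reference, a worked form of the standard change of variables behind such a mapping, assuming an integral \(\int_a^b f(x)\,dx\):
\[
x(t) = \frac{b-a}{2}t + \frac{a+b}{2}, \quad
\frac{\mathrm{d} x}{\mathrm{d} t} = \frac{b-a}{2}, \quad
\int_a^b f(x)\,dx = \frac{b-a}{2}\int_{-1}^{1} f\!\left(\frac{b-a}{2}t + \frac{a+b}{2}\right)dt
\]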
|
StandardNormalRNG |
An alias for Zignor2005 to provide a default implementation for sampling from the
standard Normal distribution.
|
StateEquation |
This is the state equation in a controlled dynamic linear model.
|
Statistic |
A statistic (singular) is a single measure of some attribute of a sample (e.g., its arithmetic mean value).
|
StatisticFactory |
|
SteepestDescentMinimizer |
A steepest descent algorithm finds the minimum by moving along the negative of the steepest
gradient direction.
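For illustration only, a minimal standalone Java sketch (not this class's API) of fixed-step steepest descent on f(x, y) = x^2 + y^2; the constant step size is an assumption for the example, whereas practical solvers typically use a line search.

public final class SteepestDescentSketch {
    public static void main(String[] args) {
        double x = 3.0, y = -4.0;
        double stepSize = 0.1;             // hand-picked constant step for the toy problem
        for (int i = 0; i < 100; i++) {
            double gx = 2 * x, gy = 2 * y; // gradient of x^2 + y^2
            x -= stepSize * gx;            // move along the negative gradient
            y -= stepSize * gy;
        }
        System.out.println(x + ", " + y);  // both close to 0, the minimizer
    }
}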
|
SteepestDescentSolver |
The Steepest Descent method (SDM) solves a symmetric n-by-n linear system.
|
StepFunction |
A step function (or staircase function) is a finite linear combination of indicator functions of
intervals.
|
StopCondition |
Defines when an algorithm stops (the iterations).
|
StringUtils |
Utility methods for string manipulation.
|
SturmCount |
Computes the Sturm count, the number of negative pivots encountered while factoring the tridiagonal
\(T - \sigma I = LDL^T\).
|
SubFunction<R> |
A sub-function, g, is defined over a subset of the domain of another
(original) function,
f.
|
SubMatrixBlock |
Sub-matrix block representation for block algorithm.
|
SubMatrixRef |
This is a 'reference' to a sub-matrix of a larger matrix without copying it.
|
SubProblemMinimizer |
This minimizer solves a constrained optimization sub-problem where the values
for some variables are held fixed for the original optimization problem.
|
SubProblemMinimizer.ConstrainedMinimizerFactory<U extends ConstrainedMinimizer<ConstrainedOptimProblem,IterativeSolution<Vector>>> |
This factory constructs a new instance of
ConstrainedMinimizer to solve a real valued minimization
problem.
|
SubProblemMinimizer.IterativeSolution<Vector> |
|
SubstitutionRule |
A substitution rule specifies \(x(t)\) and \(\frac{\mathrm{d} x}{\mathrm{d} t}\).
|
SubVectorRef |
Represents a sub-vector backed by the referenced vector, without data
copying.
|
SuccessiveOverrelaxationSolver |
The Successive Overrelaxation (SOR) method is devised by applying
extrapolation to the Gauss-Seidel method.
|
Summation |
Summation is the operation of adding a sequence of numbers; the result is their sum or total.
|
Summation.Term |
Define the terms in a summation series.
|
SumOfPenalties |
This penalty function sums up the costs from a set of constituent penalty functions.
|
SumOfPoweredWeights |
Defines portfolio diversification as
\[
D(w) = \sum_i w_i^P
\]
|
SumOfSquaredWeights |
Defines portfolio diversification as
\[
D(w) = \sum_i w_i^2
\]
|
SumOfWLogW |
Defines portfolio diversification as
\[
D(w) = \sum_i w_i \ln(w_i)
\]
|
SVD |
SVD decomposition decomposes a matrix A of dimension m x n, where m >= n,
such that
U' * A * V = D, or U * D * V' = A.
|
SVD.Method |
|
SVDbyMR3 |
Given a matrix A, computes its singular value decomposition (SVD), using
"Algorithm of Multiple Relatively Robust Representations" (MRRR).
|
SVDDecomposition |
SVD decomposition decomposes a matrix A of dimension m x n, where m >= n, such that
U' * A * V = D, or U * D * V' = A.
|
SVEC |
SVEC converts a symmetric matrix \(K = \{K_{ij}\}\) into a vector of dimension n(n+1)/2.
|
SymmetricEigenByMR3 |
Computes eigen decomposition for a symmetric matrix using "Algorithm of Multiple Relatively
Robust Representations" (MRRR).
|
SymmetricEigenFor2x2Matrix |
Computes the eigen decomposition of a 2-by-2 symmetric matrix in the following form by symmetric
QR algorithm.
|
SymmetricKronecker |
Compute the symmetric Kronecker product of two matrices.
|
SymmetricMatrix |
A symmetric matrix is a square matrix such that its transpose equals to itself, i.e.,
A[i][j] = A[j][i]
|
SymmetricQRAlgorithm |
The symmetric QR algorithm is an eigenvalue algorithm by computing the real Schur canonical form
of a square, symmetric matrix.
|
SymmetricSuccessiveOverrelaxationSolver |
The Symmetric Successive Overrelaxation method (SSOR) is like
SOR, but it performs in each
iteration one forward sweep followed by one backward sweep.
|
SymmetricSVD |
This algorithm calculates the Singular Value Decomposition (SVD) of a square, symmetric
matrix A using QR algorithm.
|
SymmetricTridiagonalDecomposition |
Given a square, symmetric matrix A, we find Q
such that Q' * A * Q = T , where T is a tridiagonal matrix.
|
SynchronizedStatistic |
This is a thread-safe wrapper of Statistic by synchronizing all public methods
so that only one thread at a time can access the instance.
|
T |
Student's t-test tests for the equality of means:
for the one-sample case, against a hypothetical mean;
for the two-sample case, between two populations.
|
Table |
A table is a means of arranging data in rows and columns.
|
TauEstimator |
Non-linear shrinkage estimator of population eigenvalues.
|
TDistribution |
The Student t distribution is the probability distribution of t, where
\[
t = \frac{\bar{x} - \mu}{s / \sqrt N}
\]
\(\bar{x}\) is the sample mean;
μ is the population mean;
s is the square root of the sample variance;
N is the sample size;
The Student's t-distribution is important
when (as in nearly all practical statistical work) the population standard deviation is unknown and has to be estimated from the data.
|
TemperatureFunction |
A temperature function defines a temperature schedule used in simulated annealing.
|
TemperedAcceptanceProbabilityFunction |
A tempered acceptance probability function computes the probability that the next state
transition will be accepted.
|
ThinRNG |
Thinning is a scheme that returns every m-th item, discarding the last m-1 items for each draw.
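For illustration only, a minimal standalone Java sketch of thinning (not this class's API): for each draw, the underlying generator is advanced m times and only the m-th value is kept; the wrapped java.util.Random and the choice m = 5 are assumptions for the example.

import java.util.Random;

public final class ThinningSketch {
    private final Random rng;
    private final int m;

    ThinningSketch(Random rng, int m) {
        this.rng = rng;
        this.m = m;
    }

    double nextDouble() {
        double x = 0.0;
        for (int i = 0; i < m; i++) {
            x = rng.nextDouble(); // the first m-1 values are discarded
        }
        return x;
    }

    public static void main(String[] args) {
        ThinningSketch thinned = new ThinningSketch(new Random(42), 5);
        System.out.println(thinned.nextDouble());
    }
}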
|
ThinRVG |
Thinning is a scheme that returns every m-th item, discarding the last m-1 items for each draw.
|
ThomasAlgorithm |
The Thomas algorithm is an efficient algorithm to solve a tridiagonal system of linear equations.
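For illustration only, a minimal standalone Java sketch (not this class's API) of the Thomas algorithm for A x = d, where A is tridiagonal with sub-diagonal a, diagonal b and super-diagonal c; it assumes the system is well conditioned, since no pivoting is performed.

public final class ThomasSketch {
    static double[] solve(double[] a, double[] b, double[] c, double[] d) {
        int n = d.length;
        double[] cp = new double[n];
        double[] dp = new double[n];
        cp[0] = c[0] / b[0];
        dp[0] = d[0] / b[0];
        for (int i = 1; i < n; i++) {          // forward elimination
            double denom = b[i] - a[i] * cp[i - 1];
            cp[i] = (i < n - 1) ? c[i] / denom : 0.0;
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom;
        }
        double[] x = new double[n];
        x[n - 1] = dp[n - 1];
        for (int i = n - 2; i >= 0; i--) {     // back substitution
            x[i] = dp[i] - cp[i] * x[i + 1];
        }
        return x;
    }

    public static void main(String[] args) {
        // solve the 3x3 tridiagonal system [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8]
        double[] a = {0, 1, 1};   // a[0] is unused
        double[] b = {2, 2, 2};
        double[] c = {1, 1, 0};   // c[n-1] is unused
        double[] x = solve(a, b, c, new double[]{4, 8, 8});
        System.out.println(java.util.Arrays.toString(x)); // ~[1.0, 2.0, 3.0]
    }
}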
|
ThreadIDRLG |
This uniform number generator generates independent sequences of random numbers per thread, hence
thread-safe.
|
ThreadIDRNG |
This random number generator generates independent sequences of random numbers per thread, hence
thread-safe.
|
Ties<T> |
Count the number of occurrences of each distinctive value.
|
TimeGrid |
Specify the time points in a grid or axis.
|
TimeInterval |
This is a time interval.
|
TimeIntervals |
|
TimeSeries<T extends Comparable<? super T>,V,E extends TimeSeries.Entry<T,V>> |
A time series is a serially indexed collection of items.
|
TimeSeries.Entry<T,V> |
A time series is composed of a sequence of Entry objects.
|
Tolerance |
The tolerance criteria for an iterative algorithm to stop.
|
TopNOptimizationAlgorithm |
|
TradingPair |
|
Trapezoidal |
The Trapezoidal rule is a closed type Newton-Cotes formula, where the integral interval is evenly divided into N sub-intervals.
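For illustration only, a minimal standalone Java sketch (not this class's API) of the composite trapezoidal rule over [a, b] with N evenly spaced sub-intervals; the test integrand x^2 on [0, 1] is an assumption for the example.

import java.util.function.DoubleUnaryOperator;

public final class TrapezoidalSketch {
    static double integrate(DoubleUnaryOperator f, double a, double b, int n) {
        double h = (b - a) / n;
        double sum = 0.5 * (f.applyAsDouble(a) + f.applyAsDouble(b)); // end points weighted by 1/2
        for (int i = 1; i < n; i++) {
            sum += f.applyAsDouble(a + i * h);                        // interior points
        }
        return sum * h;
    }

    public static void main(String[] args) {
        // integrate x^2 over [0, 1]; the exact value is 1/3
        System.out.println(integrate(x -> x * x, 0.0, 1.0, 1000)); // ~0.3333
    }
}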
|
TraversalFromRoots<V> |
A graph traversal is the problem of visiting all the nodes in a graph in a particular manner.
|
Tree<V,E extends HyperEdge<V>> |
A tree is an undirected graph in which any two vertices are connected by exactly one simple path.
|
TrendType |
These are the three versions of the Augmented Dickey-Fuller (ADF) test.
|
TriangularDistribution |
The triangular distribution is a continuous probability distribution with lower limit a, upper
limit b and mode c, where a < b and a ≤ c ≤ b.
|
TridiagonalDeflationSearch |
This class locates deflation in a tridiagonal matrix.
|
TriDiagonalization |
A tri-diagonal matrix A is a matrix such that
it has non-zero elements only in the main diagonal, the first diagonal below, and the first
diagonal above.
|
TridiagonalMatrix |
A tri-diagonal matrix has non-zero entries only on the super, main and sub diagonals.
|
Trigamma |
The trigamma function is defined as the logarithmic derivative of the digamma function.
|
TrigMath |
A collection of trigonometric functions complementary to those in Java's Math class.
|
Triple |
A triple is a tuple of length three.
|
TrivariateRealFunction |
A trivariate real function takes three real arguments and outputs one real value.
|
TruncatedNormalDistribution |
The truncated Normal distribution is the probability distribution of a normally distributed
random variable whose value is either bounded below or above (or both).
|
TurningPoint |
Represents a turning point on a Markowitz critical line.
|
Twiddle<T> |
Generates all combinations of M elements drawn without replacement from a set of N
elements.
|
UnconstrainedLASSObyCoordinateDescent |
This class solves the unconstrained form of LASSO, that is,
\[
\min_w \left \{ \left \| Xw - y \right \|_2^2 + \lambda * \left \| w
\right \|_1 \right \}
\]
by the Coordinate Descent method.
|
UnconstrainedLASSObyQP |
This class solves the unconstrained form of LASSO
(i.e., the same L1-regularized least squares problem as in UnconstrainedLASSObyCoordinateDescent) by converting it to a quadratic programming problem.
|
UnconstrainedLASSOProblem |
A LASSO (least absolute shrinkage and selection operator) problem focuses on solving an RSS
(residual sum of squared errors) problem with L1 regularization.
|
UnDiGraph<V,E extends UndirectedEdge<V>> |
An undirected graph is a graph, or set of nodes connected by edges, where an edge does not
differentiate between (a, b) or (b, a).
|
UndirectedEdge<V> |
A tagging interface for implementations of an undirected graph
that accept only undirected edges.
|
UniformDistributionOverBox |
This random vector generator uniformly samples points over a box region.
|
UniformDistributionOverBox1 |
This algorithm, by sampling uniformly in each dimension,
generates a set of initials uniformly distributed over a box region,
with some degree of irregularity or randomness.
|
UniformDistributionOverBox2 |
This algorithm, by perturbing each grid point by a small random scale,
generates a set of initials uniformly distributed over a box region,
with some degree of irregularity or randomness.
|
UniformMeshOverRegion |
The initial population is generated by putting a uniform mesh/grid/net over the entire region.
|
UniformRNG |
A pseudo uniform random number generator samples numbers from the unit interval, [0, 1],
in such a way that there are equal probabilities of them falling in any sub-interval of the same length.
|
UniformRNG.Method |
the pseudo uniform random number generators available
|
Uniroot |
A root-finding algorithm is a numerical algorithm for finding a value
x such that f(x) = 0, for a given function f.
|
UnitGrid |
This is the sequence of time points [0, 1, ..., T].
|
UnivariateEVD |
Distribution of extreme values (e.g., maxima, minima, or other order statistics).
|
UnivariateMinimizer |
A univariate minimizer minimizes a univariate function.
|
UnivariateMinimizer.Solution |
This is the solution to a univariate minimization problem.
|
UnivariateRealFunction |
A univariate real function takes one real argument and outputs one real value.
|
UnivariateTimeSeries<T extends Comparable<? super T>,E extends UnivariateTimeSeries.Entry<T>> |
This is a univariate time series indexed by some notion of time.
|
UnivariateTimeSeries.Entry<T> |
This is the TimeSeries.Entry for a univariate time series.
|
UnivariateTimeSeriesUtils |
|
UnsatisfiableErrorCriterionException |
An exception that is thrown when the error criterion cannot be met.
|
UpperBoundConstraints |
This is an upper bound constraint such that, for all \(x_i\),
\(x_i \le b\)
|
UpperTriangularMatrix |
An upper triangular matrix has 0 entries where row index is greater than column index.
|
VanDerWaerden |
The Van der Waerden test tests for the equality of all population distribution functions.
|
VanDerWaerden1969 |
Deprecated.
|
VARFit |
This class constructs a VAR model by estimating the coefficients using OLS regression.
|
Variance |
The variance of a sample is the average squared deviations from the sample mean.
|
VariancebtX |
Computes \(b'Xb\).
|
VARIMAModel |
An ARIMA(p, d, q) process, \(Y_t\), is such that
\[
X_t = (1 - L)^d Y_t
\]
where
L is the lag operator, d the order of difference,
\(X_t\) an ARMA(p, q) process, for which
\[
X_t = \mu + \Sigma \phi_i X_{t-i} + \Sigma \theta_j \epsilon_{t-j} + \epsilon_t,
\]
\(X_t\), \(\mu\) and \(\epsilon_t\) are n-dimensional
vectors.
|
VARIMASim |
This class simulates a multivariate ARIMA process.
|
VARIMAXModel |
The ARIMAX model (ARIMA model with eXogenous inputs) is a generalization of the ARIMA model by
incorporating exogenous variables.
|
VARLinearRepresentation |
The linear representation of an Autoregressive Moving Average (ARMA) model is a (truncated)
infinite sum of AR terms.
|
VARMAAutoCorrelation |
Compute the Auto-Correlation Function (ACF) for a vector AutoRegressive Moving Average (ARMA) model, assuming that
\(E(X_t) = 0\).
|
VARMAAutoCovariance |
Compute the Auto-CoVariance Function (ACVF) for a vector AutoRegressive Moving Average (ARMA) model, assuming that
\(E(X_t) = 0\).
|
VARMAForecastOneStep |
This is an implementation, adapted for an ARMA process, of the innovation algorithm,
which is an efficient way of obtaining a one step least square linear predictor.
|
VARMAModel |
A multivariate ARMA model, Xt, takes this form.
|
VARMAXModel |
The VARMAX model (ARMA model with eXogenous inputs) is a generalization of the ARMA model by
incorporating exogenous variables.
|
VARModel |
This class represents a VAR model.
|
VARXModel |
A VARX (Vector AutoRegressive model with eXogeneous inputs) model, Xt, takes
this form.
|
VECM |
A Vector Error Correction Model (VECM(p)) has one of the following specifications:
|
VECMLongrun |
The long-run Vector Error Correction Model (VECM(p)) takes this form.
|
VECMTransitory |
A transitory Vector Error Correction Model (VECM(p)) takes this form.
|
Vector |
A Euclidean vector is a geometric object that has both a magnitude/length and a direction.
|
VectorAccessException |
This is the exception thrown when any invalid access to a Vector instance is
detected, e.g., out-of-range index.
|
VectorFactory |
These are the utility functions that create new instances of vectors from existing ones.
|
VectorMathOperation |
This is a generic implementation of the math operations of double based Vector .
|
VectorMonitor |
|
VectorSizeMismatch |
This is the exception thrown when an operation is performed on two vectors with different
sizes.
|
VectorSpace<V,F extends Field<F>> |
A vector space is a set V together with two binary operations that combine two entities to yield a third,
called vector addition and scalar multiplication.
|
VertexTree<T> |
A VertexTree is both a tree and a vertex/node. This implementation builds a tree
incrementally and recursively (combining trees).
|
Viterbi |
The Viterbi algorithm is a dynamic programming algorithm for finding the most
likely sequence of hidden states - called the Viterbi path - that results in
a sequence of observed events, especially in the context of Markov
information sources and hidden Markov models.
|
VMAInvertibility |
The inverse representation of an Autoregressive Moving Average (ARMA) model is a (truncated) infinite sum of the Moving Averages.
|
VMAModel |
This class represents a multivariate MA model.
|
WaveEquation1D |
A one-dimensional wave equation is a hyperbolic PDE that takes the following form.
|
WaveEquation2D |
A two-dimensional wave equation is a hyperbolic PDE that takes the following form.
|
WeibullDistribution |
The Weibull distribution interpolates between the exponential distribution (k = 1) and the
Rayleigh distribution (k = 2),
where k is the shape parameter.
|
WeibullRNG |
This random number generator samples from the Weibull distribution using the
inverse transform sampling method.
|
WeightedArc<V> |
A weighted arc is an arc that has a weight or a cost associated with it.
|
WeightedEdge<V> |
A weighted edge has a weight or a cost associated with it.
|
WeightedMean |
The weighted mean is defined as
\[
\bar{x} = \frac{ \sum_{i=1}^N w_i x_i}{\sum_{i=1}^N w_i}
\]
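For illustration only, a minimal standalone Java sketch (not this class's API) of the weighted mean sum(w_i * x_i) / sum(w_i); it assumes the weights do not sum to zero.

public final class WeightedMeanSketch {
    static double weightedMean(double[] x, double[] w) {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < x.length; i++) {
            num += w[i] * x[i]; // weighted sum of the observations
            den += w[i];        // total weight
        }
        return num / den;       // assumes the weights do not sum to zero
    }

    public static void main(String[] args) {
        System.out.println(weightedMean(new double[]{1, 2, 3}, new double[]{1, 1, 2})); // 2.25
    }
}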
|
WeightedMedian |
A weighted median of a sample is the 50% weighted percentile. It was first
proposed by F. Y. Edgeworth.
|
WeightedRSS |
Weighted sum of squared residuals (RSS) for a given function \(f(.)\) and observations
\((x_i,y_i)\).
|
WeightedVariance |
The weighted sample variance is defined as follows.
|
White |
The White test tests for conditional heteroskedasticity.
|
WilcoxonRankSum |
The Wilcoxon rank sum test tests for the equality of means of two populations, or whether the means differ by an offset.
|
WilcoxonRankSumDistribution |
Compute the exact distribution of the Wilcoxon rank sum test statistic.
|
WilcoxonSignedRank |
The Wilcoxon signed rank test tests,
for the one-sample case, the median of the distribution against a hypothetical median, and
for the two-sample case, the equality of medians of groups.
|
WilcoxonSignedRankDistribution |
Compute the exact distribution of the Wilcoxon signed rank test statistic.
|
XiTanLiu2010a |
Xi, Tan and Liu proposed two simple algorithms to generate gamma random numbers based on
the ratio-of-uniforms method and logarithmic transformations of gamma random variable.
|
XiTanLiu2010b |
Xi, Tan and Liu proposed two simple algorithms to generate gamma random numbers based on
the ratio-of-uniforms method and logarithmic transformations of gamma random variable.
|
XtAdaptedFunction |
This represents an Ft-adapted function that depends only on X(t).
|
ZangwillMinimizer |
Zangwill's algorithm is an improved version of Powell's algorithm.
|
ZeroDriftVector |
This class represents a 0 drift function.
|
ZeroPenalty |
This is a dummy zero cost (no cost) penalty function.
|
Ziggurat2000 |
The Ziggurat algorithm is an algorithm for pseudo-random number sampling from the Normal distribution.
|
Ziggurat2000Exp |
This implements the ziggurat algorithm to sample from the exponential
distribution.
|
Zignor2005 |
This is an improved version of the Ziggurat algorithm as proposed in the reference.
|