# max/min of functions of one or more variables
# the "optim" function
# HELP:
# General-purpose optimization based on Nelder-Mead, quasi-Newton and conjugate-gradient algorithms.
# It includes an option for box-constrained optimization and simulated annealing.
# optim(par, fn, gr = NULL, ...,
#       method = c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN", "Brent"),
#       lower = -Inf, upper = Inf,
#       control = list(), hessian = FALSE)
# Arguments:
# par: Initial values for the parameters to be optimized over.
# fn: A function to be minimized (or maximized), with first argument the vector of parameters over which minimization is to take place. It should return a scalar result.
# gr: A function to return the gradient for the "BFGS", "CG" and "L-BFGS-B" methods. If it is NULL, a finite-difference approximation will be used.
#     For the "SANN" method it specifies a function to generate a new candidate point. If it is NULL a default Gaussian Markov kernel is used.
# ...: Further arguments to be passed to fn and gr.
# method: The method to be used. See 'Details'. Can be abbreviated.
# lower, upper: Bounds on the variables for the "L-BFGS-B" method, or bounds in which to search for method "Brent".
# control: A list of control parameters. See 'Details'.
# hessian: Logical. Should a numerically differentiated Hessian matrix be returned?
# Details:
# Note that arguments after ... must be matched exactly.
# By default optim performs minimization, but it will maximize if control$fnscale is negative.
# The default method is an implementation of that of Nelder and Mead (1965), that uses only function values and is robust but relatively slow. It will work reasonably well for non-differentiable functions.
# Method "BFGS" is a quasi-Newton method (also known as a variable metric algorithm), specifically that published simultaneously in 1970 by Broyden, Fletcher, Goldfarb and Shanno. This uses function values and gradients to build up a picture of the surface to be optimized.
# Method "CG" is a conjugate gradients method based on that by Fletcher and Reeves (1964) (but with the option of Polak-Ribiere or Beale-Sorenson updates). Conjugate gradient methods will generally be more fragile than the BFGS method, but as they do not store a matrix they may be successful in much larger optimization problems.
# Method "L-BFGS-B" is that of Byrd et al. (1995) which allows box constraints, that is each variable can be given a lower and/or upper bound. The initial value must satisfy the constraints. This uses a limited-memory modification of the BFGS quasi-Newton method. If non-trivial bounds are supplied, this method will be selected, with a warning.
# Method "SANN" is by default a variant of simulated annealing given in Belisle (1992). Simulated annealing belongs to the class of stochastic global optimization methods. It uses only function values but is relatively slow. It will also work for non-differentiable functions. This implementation uses the Metropolis function for the acceptance probability. By default the next candidate point is generated from a Gaussian Markov kernel with scale proportional to the actual temperature. If a function to generate a new candidate point is given, method "SANN" can also be used to solve combinatorial optimization problems. Temperatures are decreased according to the logarithmic cooling schedule as given in Belisle (1992, p. 890); specifically, the temperature is set to temp / log(((t-1) %/% tmax)*tmax + exp(1)), where t is the current iteration step and temp and tmax are specifiable via control, see below. Note that the "SANN" method depends critically on the settings of the control parameters. It is not a general-purpose method but can be very useful in getting to a good value on a very rough surface.
# Method "Brent" is for one-dimensional problems only, using optimize(). It can be useful in cases where optim() is used inside other functions where only method can be specified, such as in mle from package stats4.
# Function fn can return NA or Inf if the function cannot be evaluated at the supplied value, but the initial value must have a computable finite value of fn. (Except for method "L-BFGS-B" where the values should always be finite.)
# optim can be used recursively, and for a single parameter as well as many. It also accepts a zero-length par, and just evaluates the function with that argument.
# The control argument is a list that can supply any of the following components:
# trace: Non-negative integer. If positive, tracing information on the progress of the optimization is produced. Higher values may produce more tracing information: for method "L-BFGS-B" there are six levels of tracing. (To understand exactly what these do see the source code: higher levels give more detail.)
# fnscale: An overall scaling to be applied to the value of fn and gr during optimization. If negative, turns the problem into a maximization problem. Optimization is performed on fn(par)/fnscale.
# parscale: A vector of scaling values for the parameters. Optimization is performed on par/parscale and these should be comparable in the sense that a unit change in any element produces about a unit change in the scaled value. Not used (nor needed) for method = "Brent".
# ndeps: A vector of step sizes for the finite-difference approximation to the gradient, on par/parscale scale. Defaults to 1e-3.
# maxit: The maximum number of iterations. Defaults to 100 for the derivative-based methods, and 500 for "Nelder-Mead". For "SANN" maxit gives the total number of function evaluations: there is no other stopping criterion. Defaults to 10000.
# abstol: The absolute convergence tolerance. Only useful for non-negative functions, as a tolerance for reaching zero.
# reltol: Relative convergence tolerance. The algorithm stops if it is unable to reduce the value by a factor of reltol * (abs(val) + reltol) at a step. Defaults to sqrt(.Machine$double.eps), typically about 1e-8.
# alpha, beta, gamma: Scaling parameters for the "Nelder-Mead" method. alpha is the reflection factor (default 1.0), beta the contraction factor (0.5) and gamma the expansion factor (2.0).
# REPORT: The frequency of reports for the "BFGS", "L-BFGS-B" and "SANN" methods if control$trace is positive. Defaults to every 10 iterations for "BFGS" and "L-BFGS-B", or every 100 temperatures for "SANN".
# type: For the conjugate-gradients method. Takes value 1 for the Fletcher-Reeves update, 2 for Polak-Ribiere and 3 for Beale-Sorenson.
# lmm: An integer giving the number of BFGS updates retained in the "L-BFGS-B" method. It defaults to 5.
# factr: Controls the convergence of the "L-BFGS-B" method. Convergence occurs when the reduction in the objective is within this factor of the machine tolerance. Default is 1e7, that is a tolerance of about 1e-8.
# pgtol: Helps control the convergence of the "L-BFGS-B" method. It is a tolerance on the projected gradient in the current search direction. This defaults to zero, when the check is suppressed.
# temp: Controls the "SANN" method. It is the starting temperature for the cooling schedule. Defaults to 10.
# tmax: The number of function evaluations at each temperature for the "SANN" method. Defaults to 10.
# Any names given to par will be copied to the vectors passed to fn and gr. Note that no other attributes of par are copied over.
# Value: a list with components:
# par: The best set of parameters found.
# value: The value of fn corresponding to par.
# counts: A two-element integer vector giving the number of calls to fn and gr respectively. This excludes those calls needed to compute the Hessian, if requested, and any calls to fn to compute a finite-difference approximation to the gradient.
# convergence: An integer code. 0 indicates successful completion (which is always the case for "SANN" and "Brent"). Possible error codes are:
#   1  indicates that the iteration limit maxit had been reached.
#   10 indicates degeneracy of the Nelder-Mead simplex.
#   51 indicates a warning from the "L-BFGS-B" method; see component message for further details.
#   52 indicates an error from the "L-BFGS-B" method; see component message for further details.
# message: A character string giving any additional information returned by the optimizer, or NULL.
# hessian: Only if argument hessian is true. A symmetric matrix giving an estimate of the Hessian at the solution found. Note that this is the Hessian of the unconstrained problem even if the box constraints are active.
# Note:
# optim will work with one-dimensional pars, but the default method does not work well (and will warn). Method "Brent" uses optimize and needs bounds to be available; "BFGS" often works well enough if not.

# 1) maximize
# f(x) = exp(- ||x|| ^ 2), maximum at 0
# (a hedged sketch follows the control notes below)
# note: the function passed to optim must take a single vector argument
# use persp and contour to draw the graph of a function of two variables
# FUN(x, y) = exp(- 3 * x ^ 2 + y ^ 2)   # NOT to be used in optim (two separate arguments)
# theta, phi: angles defining the viewing direction; theta gives the azimuthal direction and phi the colatitude.
# FUN(x, y) = exp(- x ^ 2 - y ^ 2)       # NOT to be used in optim (two separate arguments)
# dimension 2 - starting from (1, 1)
# starting from c(5, 5): the function is almost 0 in that whole neighbourhood
# how can optim be "controlled"? the "control" option; the most relevant entries here are:
# trace: Non-negative integer. If positive, tracing information on the progress of the optimization is produced. Higher values may produce more tracing information.
# fnscale: An overall scaling to be applied to the value of fn and gr during optimization. If negative, turns the problem into a maximization problem. Optimization is performed on fn(par)/fnscale.
# ndeps: A vector of step sizes for the finite-difference approximation to the gradient, on par/parscale scale. Defaults to 1e-3.
# maxit: The maximum number of iterations. Defaults to 100 for the derivative-based methods, and 500 for "Nelder-Mead". For "SANN" maxit gives the total number of function evaluations: there is no other stopping criterion. Defaults to 10000.
# abstol: The absolute convergence tolerance. Only useful for non-negative functions, as a tolerance for reaching zero.
# reltol: Relative convergence tolerance. The algorithm stops if it is unable to reduce the value by a factor of reltol * (abs(val) + reltol) at a step. Defaults to sqrt(.Machine$double.eps), typically about 1e-8.
# REPORT: The frequency of reports for the "BFGS", "L-BFGS-B" and "SANN" methods if control$trace is positive. Defaults to every 10 iterations for "BFGS" and "L-BFGS-B", or every 100 temperatures for "SANN".
# type: For the conjugate-gradients method. Takes value 1 for the Fletcher-Reeves update, 2 for Polak-Ribiere and 3 for Beale-Sorenson.
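
# A minimal sketch for exercise 1. The names FUN1, f2, FUNG1, the grid limits and
# the starting values shown here are illustrative choices, not part of the original
# exercise text; optim minimizes by default, so control$fnscale = -1 is used to maximize.
FUN1 <- function(x) exp(-sum(x ^ 2))      # single vector argument, as optim requires

# surface and contour plot of the 2-d version (two separate arguments, only for plotting)
f2 <- function(x, y) exp(-x ^ 2 - y ^ 2)
gx <- seq(-2, 2, by = 0.1)
gy <- seq(-2, 2, by = 0.1)
gz <- outer(gx, gy, f2)
persp(gx, gy, gz, theta = 30, phi = 30, col = "lightblue")
contour(gx, gy, gz)

# maximization via control$fnscale = -1, dimension 2, starting from (1, 1)
optim(par = c(1, 1), fn = FUN1, control = list(fnscale = -1))
optim(par = c(1, 1), fn = FUN1, method = "BFGS", control = list(fnscale = -1))

# starting from c(5, 5) the function is numerically almost 0 everywhere nearby,
# so the methods tend to stop at once; experiment with maxit, parscale and the method
optim(par = c(5, 5), fn = FUN1, control = list(fnscale = -1, maxit = 5000))

# gradient of f, for the gradient-based methods (see exercise 2)
FUNG1 <- function(x) -2 * x * exp(-sum(x ^ 2))
optim(par = c(1, 1), fn = FUN1, gr = FUNG1, method = "BFGS", control = list(fnscale = -1))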
# increase the number of iterations, "scale" the function
# change the method:
# "BFGS"
# "L-BFGS-B" (bounds on the variables can be specified)
# "CG"
# "SANN" -> stochastic optimization

# 2) add the gradient -> for the methods that use the gradient
# gradient:
# "BFGS"
# Hessian
# relative tolerance
# "CG"
# what happens when the dimension of the problem increases?
# dim = 3
# dim = 4
# dim = 10

# 3) efficiency test: random choice of the starting point in the hypercube [li, ls]^dim

# 4) minimum of the Rosenbrock function
# (Rosenbrock banana function)
# f(x, y) = 100 * (y - x ^ 2) ^ 2 + (1 - x) ^ 2
# minimum at (1, 1), optimal value 0
FUN <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  100 * (x2 - x1 * x1) ^ 2 + (1 - x1) ^ 2
}

# plot
f1 <- function(x, y) 100 * (y - x * x) ^ 2 + (1 - x) ^ 2
xx <- seq(-1.5, 2, by = 0.1)
yy <- seq(0, 3, by = 0.1)
zz <- outer(X = xx, Y = yy, FUN = f1)
plot1 <- persp(x = xx, y = yy, z = zz, theta = 150, phi = 30, col = "lightblue",
               axes = TRUE, ticktype = "detailed")
points(trans3d(x = 1, y = 1, z = 0, pmat = plot1), pch = 16, col = "red")

# Gradient
FUNG <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
    200 * (x2 - x1 * x1))
}

# starting from (-1.2, 1)
points(trans3d(x = -1.2, y = 1, z = FUN(c(-1.2, 1)), pmat = plot1), pch = 16, col = "red")

# version of the objective that also plots each point visited by optim
FUN.p <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  res <- 100 * (x2 - x1 * x1) ^ 2 + (1 - x1) ^ 2
  points(trans3d(x = x1, y = x2, z = res, pmat = plot1), pch = 16, col = "red")
  return(res)
}
optim(par = c(-1.2, 1), fn = FUN.p)  # ok
optim(par = c(-1.2, 0), fn = FUN.p)  # ok
optim(par = c(0, 3), fn = FUN.p)     # ok

# Hessian
(res <- optim(par = c(-1.2, 1), fn = FUN, gr = FUNG, method = "BFGS",
              control = list(trace = TRUE)))
optimHess(res$par, FUN, FUNG)
det(optimHess(res$par, FUN, FUNG))
# exact Hessian at (1, 1):
#   [  802  -400 ]
#   [ -400   200 ]

##
## non-linear least squares (fitting a set of points with a curve)
##
## y = f(x, theta) + eps
## min_theta sum_i wi * (yi - f(xi, theta)) ^ 2
## theta = vector of parameters
## f(x, theta) = formula (model)
## wi = weights
##

# 5) fit the survival probabilities (projected mortality table IPS55) from age 40 with the Gompertz model
# mu(x) = B * C ^ x
# tpx = exp(- B * C ^ 40 * (C ^ t - 1) / log(C))

# directory
setwd("insert the directory here, or change the working directory via RStudio")
# load the table
# compute tp40 from lx (the first value corresponds to row 41)
# build a data.frame with the durations and the survival probabilities
# with "optim"
# x[1] = B, x[2] = C
# exercise: add the gradient
# (the objects mt40, tp40.gompertz and RES.optim used below are built in the sketch after the plot)
# plot of model vs data
plot(40 + mt40$t, tp40.gompertz(RES.optim$par, mt40$t), type = "l",
     xlab = "x+t", ylab = "tp40")
points(mt40$tp40 ~ c(40 + mt40$t), col = "red", pch = "+")
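
# A hedged sketch for exercise 5. The file name "IPS55.csv", the column name "lx"
# (with l_0 in the first row), the starting values and the bounds are assumptions:
# adapt them to the actual table. The sketch builds the objects mt40, tp40.gompertz
# and RES.optim that the plotting code above refers to.
ips55 <- read.csv("IPS55.csv")        # assumed file name and layout
lx    <- ips55$lx                     # assumed column name
l40   <- lx[41]                       # l_40 is in row 41 (l_0 in row 1)
t     <- seq_len(length(lx) - 41)     # durations t = 1, 2, ...
tp40  <- lx[41 + t] / l40
mt40  <- data.frame(t = t, tp40 = tp40)

# Gompertz model: mu(x) = B * C^x, tp40 = exp(-B * C^40 * (C^t - 1) / log(C))
tp40.gompertz <- function(par, t) {
  B <- par[1]
  C <- par[2]
  exp(-B * C ^ 40 * (C ^ t - 1) / log(C))
}

# least-squares objective in par = c(B, C)
SQ <- function(par) sum((mt40$tp40 - tp40.gompertz(par, mt40$t)) ^ 2)

# starting values and bounds are rough guesses for a Gompertz law at adult ages
RES.optim <- optim(par = c(1e-4, 1.1), fn = SQ, method = "L-BFGS-B",
                   lower = c(1e-8, 1.0001), upper = c(1e-2, 1.5))
RES.optim$par                         # estimated B and C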
# using the "nls" function
# nls(formula, data, start, control, algorithm,
#     trace, subset, weights, na.action, model,
#     lower, upper, ...)
# formula: a nonlinear model formula including variables and parameters. Will be coerced to a formula if necessary.
# data: an optional data frame in which to evaluate the variables in formula and weights. Can also be a list or an environment, but not a matrix.
# start: a named list or named numeric vector of starting estimates. When start is missing, a very cheap guess for start is tried (if algorithm != "plinear").
# control: an optional list of control settings. See nls.control for the names of the settable control values and their effect.
# algorithm: character string specifying the algorithm to use. The default algorithm is a Gauss-Newton algorithm. Other possible values are "plinear" for the Golub-Pereyra algorithm for partially linear least-squares models and "port" for the 'nl2sol' algorithm from the Port library - see the references. Can be abbreviated.
# trace: logical value indicating if a trace of the iteration progress should be printed. Default is FALSE. If TRUE the residual (weighted) sum-of-squares and the parameter values are printed at the conclusion of each iteration. When the "plinear" algorithm is used, the conditional estimates of the linear parameters are printed after the nonlinear parameters. When the "port" algorithm is used the objective function value printed is half the residual (weighted) sum-of-squares.
# subset: an optional vector specifying a subset of observations to be used in the fitting process.
# weights: an optional numeric vector of (fixed) weights. When present, the objective function is weighted least squares.
# na.action: a function which indicates what should happen when the data contain NAs. The default is set by the na.action setting of options, and is na.fail if that is unset. The 'factory-fresh' default is na.omit. Value na.exclude can be useful.
# model: logical. If true, the model frame is returned as part of the object. Default is FALSE.
# lower, upper: vectors of lower and upper bounds, replicated to be as long as start. If unspecified, all parameters are assumed to be unconstrained. Bounds can only be used with the "port" algorithm. They are ignored, with a warning, if given for other algorithms.
# Use nls.control to control the optimization:
# nls.control(maxiter = 50, tol = 1e-05, minFactor = 1/1024, printEval = FALSE, warnOnly = FALSE)
# Arguments:
# maxiter: A positive integer specifying the maximum number of iterations allowed.
# tol: A positive numeric value specifying the tolerance level for the relative offset convergence criterion.
# minFactor: A positive numeric value specifying the minimum step-size factor allowed on any step in the iteration. The increment is calculated with a Gauss-Newton algorithm and successively halved until the residual sum of squares has been decreased or until the step-size factor has been reduced below this limit.
# printEval: a logical specifying whether the number of evaluations (steps in the gradient direction taken each iteration) is printed.
# warnOnly: a logical specifying whether nls() should return instead of signalling an error in the case of termination before convergence. Termination before convergence happens upon completion of maxiter iterations, in the case of a singular gradient, and in the case that the step-size factor is reduced below minFactor.
# 6) fitting using nls
# default control parameters
# control <- nls.control(maxiter = 1000, tol = 1e-07, minFactor = 1/1024, printEval = FALSE, warnOnly = TRUE)
# using the "port" algorithm
# (a hedged sketch for this exercise is given at the end of the script)

##
## maximum likelihood estimation
##

# 7) estimate by maximum likelihood data coming from a gamma distribution
#    with parameters shape = 2, rate = 0.5, sample size 100
x <- rgamma(n = 100, shape = 2, rate = 0.5)   # simulate the sample specified by the exercise
# use the method-of-moments estimate as the starting point
m1 <- mean(x); m2 <- mean(x ^ 2)
theta0.0 <- m1 ^ 2 / (m2 - m1 ^ 2)  # shape
theta1.0 <- m1 / (m2 - m1 ^ 2)      # rate
# exercise: add the gradient
# plot: starting point, optimum, true value
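
# A minimal sketch for exercise 7, using the sample x and the moment estimates
# theta0.0, theta1.0 defined above; the negative log-likelihood is minimized.
# Reading "# plot: starting point, optimum, true value" as a density overlay is
# one possible interpretation of the comment above.
negloglik <- function(theta) {
  -sum(dgamma(x, shape = theta[1], rate = theta[2], log = TRUE))
}
RES.ml <- optim(par = c(theta0.0, theta1.0), fn = negloglik, method = "L-BFGS-B",
                lower = c(1e-6, 1e-6), hessian = TRUE)
RES.ml$par                           # ML estimates of (shape, rate)
sqrt(diag(solve(RES.ml$hessian)))    # approximate standard errors

# plot: moment starting point, ML optimum, true value (2, 0.5)
hist(x, freq = FALSE, breaks = 20)
curve(dgamma(x, shape = theta0.0, rate = theta1.0), add = TRUE, lty = 2)
curve(dgamma(x, shape = RES.ml$par[1], rate = RES.ml$par[2]), add = TRUE, col = "red")
curve(dgamma(x, shape = 2, rate = 0.5), add = TRUE, col = "blue")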
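
# A hedged sketch for exercise 6 (referenced above): fitting the same Gompertz
# survival probabilities with nls, using the data.frame mt40 built in the sketch
# for exercise 5. Starting values and bounds are illustrative assumptions; the
# control settings are those suggested in the exercise comments.
RES.nls <- nls(tp40 ~ exp(-B * C ^ 40 * (C ^ t - 1) / log(C)),
               data    = mt40,
               start   = list(B = 1e-4, C = 1.1),
               control = nls.control(maxiter = 1000, tol = 1e-07, minFactor = 1/1024,
                                     printEval = FALSE, warnOnly = TRUE),
               trace   = TRUE)
summary(RES.nls)
coef(RES.nls)

# the same fit with the "port" algorithm, which accepts bounds on B and C
RES.nls.port <- nls(tp40 ~ exp(-B * C ^ 40 * (C ^ t - 1) / log(C)),
                    data  = mt40,
                    start = list(B = 1e-4, C = 1.1),
                    algorithm = "port",
                    lower = c(B = 1e-8, C = 1.0001),
                    upper = c(B = 1e-2, C = 1.5))
coef(RES.nls.port)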