Doctoral theses
Permanent URI for this collection: https://pyxida.aueb.gr/handle/123456789/14
Browsing Doctoral theses by Title
Now showing 1 - 20 of 42
Item: A Bayesian approach to the analysis of infectious disease data using continuous-time stochastic models (19-09-2023) Μπαρμπουνάκης, Πέτρος; Barmpounakis, Petros; Athens University of Economics and Business, Department of Statistics; Dellaportas, Petros; Karlis, Dimitrios; Sypsa, Vana; Kontoyannis, Ioannis; Kalogeropoulos, Kostas; Ntzoufras, Ioannis; Demiris, Nikolaos
The aim of this doctoral thesis is the development of stochastic epidemic models, with an emphasis on infectious diseases of humans and animals. Specific statistical methodology is developed to better inform the public health policies and communications implemented by governmental organizations, especially during crises such as the Covid-19 pandemic.
Item: Actuarial modelling of claim counts and losses in motor third party liability insurance (07-2013) Tzougas, George J.; Τζουγάς, Γεώργιος Ι.; Athens University of Economics and Business, Department of Statistics; Frangos, Nikolaos
Actuarial science is the discipline that deals with uncertain events, where the concepts of probability and statistics clearly provide an indispensable instrument for the measurement and management of risks in insurance and finance. An important aspect of the business of insurance is the determination of the price, typically called the premium, to pay in exchange for the transfer of risks. It is the duty of the actuary to evaluate a fair price given the nature of the risk. Actuarial research covers a wide range of subjects, among which are risk classification and experience rating in motor third-party liability insurance, the driving forces of the research presented in this thesis. This is an area of applied statistics that has been borrowing tools from various kits of theoretical statistics, notably empirical Bayes, regression, and generalized linear models, GLM (Nelder and Wedderburn, 1972). However, the complexity of the typical application, featuring unobservable risk heterogeneity, imbalanced design, and nonparametric distributions, inspired independent fundamental research under the label 'credibility theory', now a cornerstone of contemporary insurance mathematics. Our purpose in this thesis is to contribute to the connection between risk classification and experience rating with generalized additive models for location, scale and shape, GAMLSS (Rigby and Stasinopoulos, 2005), and finite mixture models (McLachlan and Peel, 2000). In Chapter 1, we present a literature review of statistical techniques that can be practically implemented for pricing risks through ratemaking based on a priori risk classification and experience-rated or Bonus-Malus Systems. The idea behind a priori risk classification is to divide an insurance portfolio into different classes that consist of risks with a similar profile and to design a fair tariff for each of them. Recent actuarial literature assumes that the risks can be rated a priori using generalized linear models, GLM (see, for example, Denuit et al., 2007, and Boucher et al., 2007, 2008). Typical response variables involved in this process are the number of claims (or the claim frequency) and its corresponding severity (i.e. the amount the insurer paid out, given that a claim occurred). In Chapter 2, we extend this setup following the GAMLSS approach of Rigby and Stasinopoulos (2005). The GAMLSS models extend the GLM framework by allowing joint modeling of location and shape parameters. Therefore both the mean and the variance may be assessed by choosing a marginal distribution and building a predictive model using ratemaking factors as independent variables. In the setup we consider, risk heterogeneity is modeled as the distribution of the frequency and cost of claims changes between clusters by a function of the level of the ratemaking factors underlying the analyzed clusters. GAMLSS modeling is performed on all frequency and severity models. Specifically, we model the claim frequency using the Poisson, Negative Binomial Type II, Delaporte, Sichel and Zero-Inflated Poisson GAMLSS, and the claim severity using the Gamma, Weibull, Weibull Type III, Generalized Gamma and Generalized Pareto GAMLSS, as these models have not been studied in the risk classification literature. The difference between these models is analyzed through the mean and the variance of the annual number of claims and the costs of claims of the insureds who belong to different risk classes. The resulting a priori premium rates are calculated via the expected value and standard deviation principles, with independence between the claim frequency and severity components assumed. However, in risk classification many important factors cannot be taken into account a priori. Thus, despite the a priori rating system, tariff cells will not be completely homogeneous and may generate a ratemaking structure that is unfair to the policyholders. In order to reduce the gap between the individual's premium and risk, and to increase incentives for road safety, the individual's past record must be taken into consideration under an a posteriori model. Bonus-Malus Systems (BMSs) are a posteriori rating systems that penalize insureds responsible for one or more accidents by premium surcharges, or maluses, and reward claim-free policyholders by awarding them discounts, or bonuses. A basic interest of the actuarial literature is the construction of an optimal or 'ideal' BMS, defined as a system obtained through Bayesian analysis. A BMS is called optimal if it is financially balanced for the insurer, i.e. the total amount of bonuses is equal to the total amount of maluses, and if it is fair for the policyholder, i.e. the premium paid by each policyholder is proportional to the risk that they impose on the pool. The study of such systems based on different statistical models is the main objective of this thesis. In Chapter 3, we extend the current BMS literature using the Sichel distribution to model the claim frequency distribution. This system is proposed as an alternative to the optimal BMS obtained by the Negative Binomial model (see Lemaire, 1995). We also consider the optimal BMS provided by the Poisson-Inverse Gaussian distribution, which is a special case of the Sichel distribution. Furthermore, we introduce a generalized BMS that takes into account both the a priori and a posteriori characteristics of each policyholder, extending the framework developed by Dionne and Vanasse (1989, 1992). This is achieved by employing GAMLSS modeling on all the frequency models considered in this chapter, i.e. the Negative Binomial, Sichel and Poisson-Inverse Gaussian models. In the above setup, optimality is achieved by minimizing the insurer's risk. The majority of optimal BMSs in force assign to each policyholder a premium based on their number of claims, disregarding their aggregate amount. In this way, a policyholder who underwent an accident with a small size of loss is unfairly penalized in comparison to a policyholder who had an accident with a big size of loss. Motivated by this, the first objective of Chapter 4 is the integration of claim severity into the optimal BMSs based on the a posteriori criteria of Chapter 3. For this purpose we consider that the losses are distributed according to a Pareto distribution, following the setup used by Frangos and Vrontos (2001). The second objective of Chapter 4 is the development of a generalized BMS with a frequency and a severity component when both the a priori and the a posteriori rating variables are used. For the frequency component we assume that the number of claims is distributed according to the Negative Binomial Type I, Poisson-Inverse Gaussian and Sichel GAMLSS. For the severity component we consider that the losses are distributed according to a Pareto GAMLSS. This system is derived as a function of the years that the policyholder is in the portfolio, their number of accidents, the size of loss of each of these accidents, and the statistically significant a priori rating variables for the number of accidents and for the size of loss that each of these claims incurred. Furthermore, we present a generalized form of the system obtained in Frangos and Vrontos (2001). Finally, in Chapter 5 we place emphasis on both the claim frequency and severity components of an optimal BMS using finite mixtures of distributions and regression models (see McLachlan and Peel, 2000, and Rigby and Stasinopoulos, 2009), as these methods, with the exception of Lemaire (1995), have not been studied in the BMS literature. Specifically, for the frequency component we employ finite Poisson, Delaporte and Negative Binomial mixtures, while for the severity component we employ finite Exponential, Gamma, Weibull and Generalized Beta Type II (GB2) mixtures, updating the posterior probability. We also consider the case of a finite Negative Binomial mixture and a finite Pareto mixture updating the posterior mean. The generalized BMSs we propose adequately integrate risk classification and experience rating by taking into account both the a priori and a posteriori characteristics of each policyholder.
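To make the optimal-BMS machinery above concrete, here is a minimal sketch of the textbook Negative Binomial (Poisson-gamma) credibility premium in the sense of Lemaire (1995), the baseline that the thesis generalizes with Sichel and GAMLSS frequency models. The prior parameters are hypothetical, not estimates from the thesis.

```python
from dataclasses import dataclass

@dataclass
class NegBinBMS:
    """Optimal BMS under a Poisson-gamma (Negative Binomial) claim
    frequency model: lambda ~ Gamma(shape=a, rate=tau)."""
    a: float    # prior shape (illustrative value, not from the thesis)
    tau: float  # prior rate

    def posterior_mean_frequency(self, k: int, t: float) -> float:
        # The Gamma prior is conjugate to the Poisson likelihood, so after
        # observing k claims in t years the posterior is Gamma(a + k, tau + t).
        return (self.a + k) / (self.tau + t)

    def relative_premium(self, k: int, t: float) -> float:
        # Premium as a percentage of the a priori premium a/tau; the system's
        # financial balance follows because posterior means average back to
        # the prior mean over the portfolio.
        return 100.0 * self.posterior_mean_frequency(k, t) / (self.a / self.tau)

bms = NegBinBMS(a=1.2, tau=8.0)          # hypothetical portfolio estimates
print(bms.relative_premium(k=0, t=3))    # claim-free for 3 years -> bonus (<100)
print(bms.relative_premium(k=2, t=3))    # two claims -> malus (>100)
```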
Item: Adaptive designs in phase II clinical trials (23-09-2013) Poulopoulou, Stavroula; Πουλοπούλου, Σταυρούλα; Athens University of Economics and Business, Department of Statistics; Karlis, Dimitrios; Dafni, Urania
Clinical trials play a very important role in the development process of new therapies. Recently there has been a rapid increase in the research and creation of new modern molecular agents, which makes necessary the development of more flexible and adaptive designs for the implementation of clinical trials. The objective of adaptive designs is to ensure direct and dynamic control of the effectiveness and the safety of a new treatment, by allowing the adjustment of the elements of the study (e.g. sample size) during the study, in such a way that we do not sacrifice elements which are associated with the credibility of the study (e.g. statistical power) or issues which concern the ethical characteristics of clinical trials.
Item: Affine models: change point detection and applications (17-12-2021) Bisiotis, Konstantinos; Μπισιώτης, Κωνσταντίνος; Athens University of Economics and Business, Department of Statistics; Yannacopoulos, Athanasios; Tzavalis, Elias; Vrontos, Ioannis; Tsekrekos, Andrianos; Weber, Gerhard-Wilhelm; Moguerza, Javier M.; Psarakis, Stelios
The purpose of the present thesis is the application of statistical process control (SPC) techniques, specifically control charts, to Gaussian affine term structure models (ATSMs). In recent years SPC methods have been widely used in several non-industrial scientific areas, such as finance. Gaussian ATSMs under no-arbitrage conditions have been a very important research tool in the area of the term structure of interest rates. In our work we propose several control chart procedures that approach ATSMs from the change-point perspective. First, we construct control chart procedures for monitoring the parameters of an ATSM and examine their ability to detect various types of shifts in the yield curve. We also propose a technique for re-estimating the target process of the control chart procedure when a change is detected. The results show that no single chart performs well for all types of shifts; a combination of control charts is needed. Second, we extend the class of term structure models estimated using the minimum chi-square estimation (MCSE) method by constructing fixed-income government bond portfolios. The proposed bond portfolio strategies from the ATSM perform better in most cases than traditional bond portfolio strategies. Next, the control charts are applied to monitoring the optimal portfolio weights. Third, we propose and construct control charts for monitoring shifts in the autoregressive and moving average matrices of a VARMA ATSM. In the first step of the estimation procedure of the model, alongside standard estimation procedures, we apply a minimum distance estimation method based on the impulse responses and demonstrate its advantages in forecasting the yield curve. In the second step, we estimate the market prices of risk.
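Several records in this listing (this one, and the control-charts thesis further down) share the same charting vocabulary. As a generic point of reference, here is a minimal EWMA control chart for a univariate statistic with known in-control mean and standard deviation; it is an illustrative sketch only, not the thesis's monitoring scheme, which tracks ATSM parameter estimates rather than a raw mean.

```python
import numpy as np

def ewma_chart(x, mu0, sigma0, lam=0.2, L=3.0):
    """Minimal EWMA chart: lam is the smoothing weight, L the limit width.
    Returns the EWMA statistic and an out-of-control flag per observation."""
    z = np.empty(len(x))
    flags = np.empty(len(x), dtype=bool)
    prev = mu0
    for i, xi in enumerate(x):
        prev = lam * xi + (1 - lam) * prev
        z[i] = prev
        # exact time-varying variance of the EWMA statistic
        var = sigma0**2 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * (i + 1)))
        flags[i] = abs(prev - mu0) > L * np.sqrt(var)
    return z, flags

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(1.0, 1, 30)])  # shift at t=50
z, flags = ewma_chart(x, mu0=0.0, sigma0=1.0)
print("first signal at t =", int(np.argmax(flags)) if flags.any() else None)
```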
Item: Application of Copula functions in statistics (09-2007) Nikoloulopoulos, Aristidis; Νικολουλόπουλος, Αριστείδης; Athens University of Economics and Business, Department of Statistics; Karlis, Dimitrios
Studying associations among multivariate outcomes is an interesting problem in statistical science. The dependence between random variables is completely described by their multivariate distribution. When the multivariate distribution has a simple form, standard methods can be used to make inference. On the other hand, one may create multivariate distributions based on particular assumptions, thus limiting their use. Unfortunately, these limitations occur very often when working with multivariate discrete distributions. Some multivariate discrete distributions used in practice can have only certain properties; for example, they allow only for positive dependence, or they can have marginal distributions of a given form. To solve this problem, copulas seem to be a promising solution. Copulas are a currently fashionable way to model multivariate data, as they account for the dependence structure and provide a flexible representation of the multivariate distribution. Furthermore, for copulas the dependence properties can be separated from their marginal properties, and multivariate models with marginal densities of arbitrary form can be constructed, allowing a wide range of possible association structures. In fact they allow for flexible dependence modelling, different from assuming simple linear correlation structures. However, in the application of copulas to discrete data the marginal parameters affect the dependence structure too, and hence the dependence properties are not fully separated from the marginal properties. Introducing covariates to describe the dependence by modelling the copula parameters is of special interest in this thesis. Thus, covariate information can describe the dependence either indirectly through the marginal parameters or directly through the parameters of the copula. We examine the case where the covariates are used in marginal and/or copula parameters, aiming at a highly flexible model producing very elegant dependence structures. Furthermore, the literature contains many theoretical results and families of copulas with several properties, but there are few papers that compare the copula families and discuss model selection issues among candidate copula models, raising the question of which copulas are appropriate and whether we are able, from real data, to select the true copula that generated the data among a series of candidates with, perhaps, very similar dependence properties. We examined a large set of candidate copula families, taking into account properties like concordance and tail dependence. The comparison is made theoretically using Kullback-Leibler distances between them. We have selected this distance because it has a nice relationship with the log-likelihood and thus it can provide interesting insight into the likelihood-based procedures used in practice. Furthermore, a goodness-of-fit test based on the Mahalanobis distance, computed through parametric bootstrap, is provided. Moreover, we adopt a model averaging approach to copula modelling, based on the non-parametric bootstrap. Our intention is not to underestimate variability but to add the additional variability induced by model selection, making the precision of the estimate unconditional on the selected model. Moreover, our estimates are synthesized from several different candidate copula models and thus can have a flexible dependence structure. Taking into consideration the extensive literature on copulas for multivariate continuous data, we concentrated our interest on fitting copulas to multivariate discrete data. The applications of multivariate copula models for discrete data are limited. Usually we have to trade off between models with limited dependence (e.g. only positive association) and models with flexible dependence but computational intractabilities. For example, the elliptical copulas provide a wide range of flexible dependence but do not have closed-form cumulative distribution functions. Thus one needs to evaluate the multivariate copula, and hence a multivariate integral, repeatedly a large number of times. This can be time consuming and, because of the numerical approach used to evaluate a multivariate integral, it may also produce round-off errors. On the other hand, multivariate Archimedean copulas, partially symmetric m-variate copulas with m-1 dependence parameters, and copulas that are mixtures of max-infinitely divisible bivariate copulas have closed-form cumulative distribution functions, and thus computations are easy, but they allow only positive dependence among the random variables. The bridge between the two above-mentioned problems might be the definition of a copula family which has a simple form for its distribution function while allowing for negative dependence among the variables. We define such a multivariate copula family by exploiting a finite mixture of simple uncorrelated normal distributions. Since the correlation vanishes, the cumulative distribution is simply the product of univariate normal cumulative distribution functions. The mixing operation introduces dependence. Hence we obtain a kind of flexible dependence, and allow for negative dependence.
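The computational burden described above is easy to see in code. The sketch below builds a bivariate pmf from Poisson margins and a Gaussian copula via rectangle probabilities, the standard construction for copulas with discrete margins. It is a generic illustration (the margins, means and rho are hypothetical), not the mixture family the thesis proposes; note that every pmf value costs four bivariate normal CDF evaluations, which is exactly the cost the abstract discusses.

```python
import numpy as np
from scipy.stats import norm, poisson, multivariate_normal

def gaussian_copula_cdf(u, v, rho):
    """C(u, v) = Phi_2(Phi^{-1}(u), Phi^{-1}(v); rho)."""
    point = np.array([norm.ppf(u), norm.ppf(v)])
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0, 0], cov=cov).cdf(point)

def joint_pmf(x, y, mu1, mu2, rho):
    """P(X=x, Y=y) for Poisson margins coupled by a Gaussian copula,
    via finite differences ('rectangle probabilities') of the copula CDF."""
    F1 = lambda k: poisson.cdf(k, mu1) if k >= 0 else 0.0
    F2 = lambda k: poisson.cdf(k, mu2) if k >= 0 else 0.0
    C = lambda u, v: 0.0 if u == 0.0 or v == 0.0 else gaussian_copula_cdf(u, v, rho)
    return (C(F1(x), F2(y)) - C(F1(x - 1), F2(y))
            - C(F1(x), F2(y - 1)) + C(F1(x - 1), F2(y - 1)))

# hypothetical margins with a *negative* dependence parameter
print(joint_pmf(0, 3, mu1=1.5, mu2=2.0, rho=-0.5))
```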
Item: Application of hidden Markov and related models to earthquake studies (2015) Orfanogiannaki, Aikaterini M.; Ορφανογιαννάκη, Αικατερίνη Μ.; Athens University of Economics and Business, Department of Statistics; Karlis, Dimitrios
Discrete-valued hidden Markov models (HMMs) are used to model time series of event counts in several scientific fields, such as genetics, engineering, seismology and finance. In its general form the model consists of two parts: the observation sequence, and an unobserved sequence of hidden states that underlies the data and constitutes a Markov chain. Each state is characterized by a specific distribution, and the progress of the hidden process from state to state is controlled by a transition probability matrix. We extend the theory of HMMs to the multivariate case and apply them to seismological data from different seismotectonic environments. This extension is not straightforward, and it is achieved gradually by assuming different multivariate distributions to describe each state of the model.
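The two model parts named in the abstract (state-specific distributions and a transition probability matrix) combine in the likelihood through the forward recursion. Below is a minimal scaled forward algorithm for a univariate Poisson HMM with hypothetical two-state parameters; the thesis's multivariate extensions replace the Poisson pmf with multivariate count distributions.

```python
import numpy as np
from scipy.stats import poisson

def poisson_hmm_loglik(counts, P, lambdas, pi0):
    """Log-likelihood of an event-count series under a Poisson HMM via the
    scaled forward algorithm.  P is the K x K transition matrix, lambdas the
    state-specific Poisson means, pi0 the initial state distribution."""
    alpha = pi0 * poisson.pmf(counts[0], lambdas)
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for y in counts[1:]:
        alpha = (alpha @ P) * poisson.pmf(y, lambdas)   # predict, then update
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()                            # rescale for stability
    return loglik

# hypothetical 2-state model: a quiet state and an active (aftershock) state
P = np.array([[0.95, 0.05],
              [0.20, 0.80]])
counts = np.array([0, 1, 0, 7, 9, 3, 0])
print(poisson_hmm_loglik(counts, P, lambdas=np.array([0.5, 6.0]),
                         pi0=np.array([0.9, 0.1])))
```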
Item: Applications of stochastic analysis in sensitivity analysis and insurance mathematics (13-10-2016) Roumelioti, Eleni E.; Ρουμελιώτη, Ελένη Ε.; Athens University of Economics and Business, Department of Statistics; Zazanis, M.
This thesis deals primarily with the use of Malliavin calculus techniques in estimating the sensitivity of functionals of diffusion processes.
Item: Bayesian analysis and model selection for contingency tables using power priors (21-03-2022) Μαντζούνη, Αικατερίνη; Mantzouni, Katerina; Athens University of Economics and Business, Department of Statistics; Karlis, Dimitrios; Kateri, Maria; Tarantola, Claudia; Demiris, Nikolaos; Papastamoulis, Panagiotis; Vasdekis, Vassilis; Ntzoufras, Ioannis
The central pillar of this doctoral thesis is the development of a proposed methodology for the Bayesian analysis of categorical variables in contingency tables, with the aim of selecting the most appropriate model. The proposed methodology comprises the specification of suitable prior distributions, as well as computational techniques for the estimation of Bayesian marginal likelihoods, which are necessary for computing the posterior quantities used in Bayesian model comparison and selection. More specifically, the choice of a suitable prior distribution in Bayesian model comparison and the associated tests is often problematic, because of the well-known sensitivity of the posterior probabilities and the Bartlett-Lindley paradox. This fact has led to the development of objective Bayesian techniques, which propose the use of non-informative prior distributions when no prior information about the data is available. In this context, power prior distributions are proposed. For the application of the proposed methodology to contingency tables, aiming at the selection of the most appropriate association model, two prior scenarios were constructed using imaginary data, based on the power priors. We introduce and examine two proposed Monte Carlo estimators. All techniques were applied and tested on real data as well as in detailed simulation studies. To check the validity of the proposed methodology, criteria of objective Bayes methods were used, such as model selection consistency, information consistency, and the predictive matching criterion. Finally, an extension of the methodology is presented to the Bayesian analysis of graphical models for three-way contingency tables using power priors. To each conditional independence model corresponds a specific factorization of the cell probabilities, and conjugate analysis based on Dirichlet prior distributions is applied. Unit-information priors are used as a benchmark, in order to assess and interpret the effect of any prior distribution on the Bayes factor and, by extension, on the graphical model selection procedure.
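Conjugate Dirichlet analysis is what makes the marginal likelihoods above tractable: for a multinomial table the marginal likelihood is a ratio of multivariate beta functions, so a simple Bayes factor needs no Monte Carlo at all. The sketch below compares the saturated model against row-column independence for a two-way table under plain symmetric Dirichlet priors; it illustrates only the conjugate mechanics, not the thesis's power-prior or imaginary-data constructions.

```python
import numpy as np
from scipy.special import gammaln

def log_dirichlet_norm(alpha):
    """log of the multivariate beta function B(alpha)."""
    return gammaln(alpha).sum() - gammaln(alpha.sum())

def log_bf_saturated_vs_independence(table, a=1.0):
    """log Bayes factor of the saturated multinomial model against row-column
    independence, with Dirichlet(a,...,a) priors.  The multinomial coefficient
    is common to both models and cancels in the ratio."""
    y = np.asarray(table, dtype=float)
    # saturated model: one Dirichlet over all cells
    a_cells = np.full(y.size, a)
    log_m_sat = log_dirichlet_norm(a_cells + y.ravel()) - log_dirichlet_norm(a_cells)
    # independence model: independent Dirichlets over row and column margins
    ar, ac = np.full(y.shape[0], a), np.full(y.shape[1], a)
    log_m_ind = (log_dirichlet_norm(ar + y.sum(1)) - log_dirichlet_norm(ar)
                 + log_dirichlet_norm(ac + y.sum(0)) - log_dirichlet_norm(ac))
    return log_m_sat - log_m_ind

print(log_bf_saturated_vs_independence([[18, 2], [5, 25]]))   # associated table
print(log_bf_saturated_vs_independence([[10, 10], [10, 10]])) # balanced table
```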
Item: Bayesian evidence synthesis for the analysis of biomedical data (31-05-2024) Αψεμίδης, Αναστάσιος; Apsemidis, Anastasios; Athens University of Economics and Business, Department of Statistics; Vasdekis, Vassilis; Kalogeropoulos, Kostas; Ntzoufras, Ioannis; Karlis, Dimitrios; Kyriakidis, Epaminondas; Kypraios, Theodore; Demiris, Nikolaos
In the current era of flourishing Statistics and Data Science, the analysis of biomedical data is steadily gaining the attention of researchers and practitioners, who try to use the wealth of available information in decision-making processes. Bayesian methodology, whose popularity has also grown in recent decades owing to computational and statistical advances in Monte Carlo simulation methods, provides a coherent framework for synthesizing information from different sources. We therefore aim to use Bayesian models to estimate important quantities in the fields of both infectious and non-communicable diseases. Regarding infectious diseases, we deal with the Covid-19 pandemic; specifically, we build discrete-time stochastic compartmental models based on the latent level of recorded and unrecorded cases, in order to estimate the reproduction number and the proportion of observed cases. We also approach the problem from the viewpoint of dynamical systems, aiming to develop insight as well as to construct quantities suitable for decision support. In the context of non-communicable diseases, we propose methods for extrapolating the survival curve, taking mortality projections into account, in order to estimate the life-years gained when one treatment is applied instead of another. The methodology is presented through three examples of interest to the medical community, concerning breast cancer, metastatic melanoma and cardiac arrhythmia.
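To fix ideas about "recorded and unrecorded cases", here is a toy discrete-time stochastic compartmental model in which each day's new infections are binomially thinned into an observed series. All rates are hypothetical, and the thesis's models are substantially richer; this only illustrates the observed/latent split.

```python
import numpy as np

def simulate_sir(beta, gamma, report_prob, N, I0, T, rng):
    """Discrete-time stochastic SIR with binomially thinned reporting.
    Returns the series of true and reported new infections."""
    S, I = N - I0, I0
    new_true, new_rep = [], []
    for _ in range(T):
        p_inf = 1.0 - np.exp(-beta * I / N)      # per-susceptible infection prob.
        cases = rng.binomial(S, p_inf)           # new (true) infections
        recov = rng.binomial(I, 1.0 - np.exp(-gamma))
        S, I = S - cases, I + cases - recov
        new_true.append(cases)
        new_rep.append(rng.binomial(cases, report_prob))  # observed subset
    return np.array(new_true), np.array(new_rep)

rng = np.random.default_rng(0)
true_c, rep_c = simulate_sir(beta=0.4, gamma=0.2, report_prob=0.3,
                             N=100_000, I0=50, T=60, rng=rng)
print("basic reproduction number R0 ~ beta/gamma =", 0.4 / 0.2)
print("empirical reported fraction:", rep_c.sum() / true_c.sum())
```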
Item: Bayesian model determination and nonlinear threshold volatility models. Petralias, Athanassios; Πετραλιάς, Αθανάσιος; Athens University of Economics and Business, Department of Statistics; Dellaportas, Petros; Ntzoufras, Ioannis
The purpose of this Thesis is to document an original contribution in the areas of model determination and volatility modeling. Model determination is the procedure that evaluates the ability of competing hypothesized models to describe a phenomenon under study. Volatility modeling, in the present context, involves developing models that can adequately describe the volatility process of a financial time series. In this Thesis we focus on the development of efficient algorithms for Bayesian model determination using Markov Chain Monte Carlo (MCMC), which are also used to develop a family of nonlinear flexible models for volatility. We propose a new method for Bayesian model determination that incorporates several desirable characteristics, resulting in better mixing for the MCMC chain and more precise estimates of the posterior density. The new method is compared with various existing methods in an extensive simulation study, as well as in more complex model selection problems based on linear regression, with both simulated and real data comprising 300 to 1000 variables. The method produces rather promising results, outperforming several other existing algorithms in most of the analyzed cases. Furthermore, the method is applied to gene selection using logistic regression, with a well-known dataset including 3226 genes. The problem lies in identifying the genes related to the presence of a specific form of breast cancer. The new method again proves to be more efficient when compared to an existing Population MCMC sampler, while we extend the findings of previous medical studies on this issue. We present a new class of flexible threshold models for volatility. In these models the variables included, as well as the number and location of the threshold points, are estimated, while the exogenous variables are allowed to be observed at lower frequencies than the dependent variable. To estimate these models we use the new method for Bayesian model determination, enriched with new move types, the use of which is validated through additional simulations. Furthermore, we propose a comparative model based on splines, where the number and location of the spline knots are related to a set of exogenous variables. The new models are applied to estimate and predict the variance of the Euro-dollar exchange rate, using as exogenous variables a set of U.S. macroeconomic announcements. The results indicate that the threshold models can provide significantly better estimates and projections than the spline model and typical conditional volatility models, while the most important macroeconomic announcements are identified. The threshold models are then generalised to the multivariate case. Under the proposed methodology, only the estimation of the univariate variances is required, together with a rather small collection of regression coefficients. This simplifies the inference greatly, while the model is found to perform rather well in terms of predictability. A detailed review of both the available algorithms for Bayesian model determination and nonlinear models for financial time series is also included in this Thesis. We illustrate how the existing methods for model determination are embedded into a common general scheme, and we discuss the properties and advantages each method has to offer. The main argument presented is that there is no globally best or preferable method; rather, their relative performance and applicability depend on the dataset and problem of interest. With respect to the nonlinear models for financial time series and volatility, we present in a unified manner the main parametric and nonparametric classes of these models, and we also review event studies analyzing the effect of news announcements on volatility.
Item: Bayesian modeling and estimation for complex multiparameter problems with real applications (2021) Koki, Constandina; Κοκή, Κωνσταντίνα; Athens University of Economics and Business, Department of Statistics; Meligkotsidou, Loukia; Karlis, Dimitrios; Dellaportas, Petros; Kypraios, Theodore; Fouskakis, Dimitris; Kalogeropoulos, Kostas; Vrontos, Ioannis
In the big data era, the study of complex multiparameter problems is more than necessary. The development of Machine Learning techniques has enhanced the inferential ability of statistical models. In this direction, by leveraging Machine Learning techniques, we propose a new predictive Hidden Markov model with exogenous variables, within a Bayesian framework, for joint inference and variable selection. We propose a computational Markov Chain Monte Carlo algorithm that offers improved forecasting and variable selection performance compared to existing benchmark models. Our methodology is applied to various simulated and real datasets, such as realized volatility data and cryptocurrency return series. Furthermore, we exploit the Bayesian methodology in implementing the X-ray luminosity function of Active Galactic Nuclei, under the assumption of Poisson errors in the determination of X-ray fluxes and estimation uncertainties.
Item: Bayesian modelling of high dimensional financial data using latent Gaussian models. Alexopoulos, Angelis N.; Αλεξόπουλος, Αγγελής Ν.; Athens University of Economics and Business, Department of Statistics; Dellaportas, Petros; Papaspiliopoulos, Omiros
The present thesis deals with the problem of developing statistical methodology for the modelling and inference of high dimensional financial data. The motivation for our research was the identification of infrequent and extreme movements, called jumps, in the prices of the 600 stocks of the Euro STOXX index. This is known in the financial and statistical literature as the problem of separating jumps from the volatility of the underlying process which is assumed for the evolution of the stock prices. The main contribution of the thesis is the modelling, and the development of methods for inference, of the characteristics of the jumps across multiple stocks as well as across the time horizon. Following the Bayesian paradigm, we use prior information in order to model a known characteristic of financial crises, namely that jumps in stock prices tend to occur clustered in time and to affect several markets within a short period of time. An improvement in the prediction of future stock prices has been achieved. The proposed model combines the stochastic volatility (SV) model with a multivariate jump process and belongs to the very broad class of latent Gaussian models. Bayesian inference for latent Gaussian models relies on a Markov chain Monte Carlo (MCMC) algorithm which alternates between sampling from the distribution of the latent states of the model conditional on the parameters and the observations, and sampling from the distribution of the parameters of the model conditional on the latent states and the observations. In the case of SV models with jumps, sampling the latent volatility process of the model is not a new problem. Over the last few years several methods have been proposed for separating the jumps from the volatility process, but there is no satisfactory solution yet, since sampling from a high dimensional nonlinear and non-Gaussian distribution is required. In the present thesis we propose a Metropolis-Hastings algorithm in which we sample the whole path of the volatility process of the model without using any approximation. We compare the resulting MCMC algorithm with existing algorithms. We apply our proposed methodology to univariate SV-with-jumps models in order to identify jumps in the stock prices of the real dataset that motivated our research. To model the propagation of the jumps across stocks and across time, we combine the SV model with a doubly stochastic Poisson process, also known as a Cox process. The intensity of the jumps in the Poisson process is modelled using a dynamic factor model. Furthermore, we develop an MCMC algorithm to conduct Bayesian inference for the parameters and the latent states of the proposed model. We test the proposed methods on simulated data and apply them to our real dataset. We compare the predictions of future stock prices using the proposed model with the predictions obtained using existing models. The proposed model provides better predictions of future stock prices, and this is an indication of a predictable part of the jump process of SV models. The MCMC algorithm that is implemented in order to conduct Bayesian inference for the aforementioned models is also employed in a demographic application. More precisely, within the context of latent Gaussian models we present a novel approach to model and predict the mortality rates of individuals.
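The separation problem described above is easiest to see from the data-generating side. The sketch below simulates a basic univariate SV-with-jumps process (AR(1) log-variance plus Bernoulli-normal jumps). Parameter values are hypothetical; the thesis's inference problem is the reverse of this simulation, i.e. recovering the latent volatility path and jump indicators from the returns alone.

```python
import numpy as np

def simulate_sv_jumps(T, mu, phi, sigma_h, jump_prob, jump_scale, rng):
    """Simulate a basic SV-with-jumps data-generating process:
        h_t = mu + phi (h_{t-1} - mu) + sigma_h eta_t   (log-variance, AR(1))
        y_t = exp(h_t / 2) eps_t + J_t Z_t              (return plus jump)
    with J_t ~ Bernoulli(jump_prob) and Z_t ~ N(0, jump_scale^2)."""
    h = np.empty(T)
    h[0] = mu
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_h * rng.normal()
    jumps = rng.random(T) < jump_prob
    y = np.exp(h / 2) * rng.normal(size=T) + jumps * rng.normal(0, jump_scale, T)
    return y, h, jumps

rng = np.random.default_rng(42)
y, h, jumps = simulate_sv_jumps(T=1000, mu=-9.0, phi=0.97, sigma_h=0.15,
                                jump_prob=0.01, jump_scale=0.05, rng=rng)
print("number of jump days:", int(jumps.sum()))
```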
Item: Control charts for some discrete and continuous distributions (17-12-2024) Δεμερτζή, Ελισάβετ; Demertzi, Elisavet; Athens University of Economics and Business, Department of Statistics; Castagliola, Philippe; Koutras, Markos; Moguerza, Javier; Vrontos, Ioannis; Yannacopoulos, Athanasios; Tsiamyrtzis, Panagiotis; Psarakis, Stelios
This doctoral thesis deals with control charts for individual observations from non-symmetric distributions for which control charts have either not been constructed, or not been studied sufficiently, in the Statistical Quality Control literature, even though they have many applications in various fields of everyday life. Examples of the first case are the Logarithmic distribution, the Lindley distribution and the distributions related to it, while a case belonging to the second category is the Pareto distribution. This doctoral thesis is an attempt to fill this gap in the literature. The first part of the thesis presents introductions to control charts and to the aforementioned distributions, as well as the usefulness of charts for individual observations. In the second part, control charts for individual observations are constructed for the original one-parameter Lindley distribution and a two-parameter form of it, as well as for the Logarithmic distribution and the Pareto I distribution. Each of these charts is first constructed with control limits based on the type I error probability. Subsequently, Shewhart-type control charts as well as EWMA charts for individual observations are constructed using two different skewness-correction methods (since all the distributions of interest are non-symmetric), so as to improve the performance of the proposed charts. The performance of all the charts is investigated and compared in terms of the average run length (ARL), and this performance is demonstrated through simulated as well as real data. Conclusions and proposals for further research are also provided in the final chapter of this thesis.
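The "control limits based on the type I error probability" mentioned above are simply equal-tailed quantiles of the in-control distribution, and the in-control ARL is the reciprocal of the false-alarm probability. A minimal sketch for a Pareto I process follows (shape values are hypothetical; the thesis's skewness-corrected Shewhart and EWMA variants go well beyond this).

```python
from scipy.stats import pareto

ALPHA = 0.0027  # overall false-alarm probability (in-control ARL = 1/ALPHA)

def probability_limits(dist, alpha=ALPHA):
    """Equal-tailed probability control limits for individual observations
    from a skewed distribution."""
    return dist.ppf(alpha / 2), dist.ppf(1 - alpha / 2)

ic = pareto(b=3.0)                       # hypothetical in-control Pareto I
lcl, ucl = probability_limits(ic)
print("LCL, UCL =", lcl, ucl, "| in-control ARL =", 1 / ALPHA)

oc = pareto(b=2.0)                       # shift: heavier tail (shape drops)
p_signal = oc.cdf(lcl) + oc.sf(ucl)      # per-point signal probability
print("out-of-control ARL =", 1 / p_signal)
```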
Item: Discrete, continuous and machine learning models with applications in credit risk (13-09-2023) Γεωργίου, Κυριάκος; Georgiou, Kyriakos; Athens University of Economics and Business, Department of Statistics; Xanthopoulos, Stylianos; Tsekrekos, Andrianos; Zazanis, Michael; Psarakis, Stelios; Siettos, Konstantinos; Weber, Gerhard-Wilhelm; Yannacopoulos, Athanasios
Credit risk modelling is a rapidly growing and dynamic branch of financial mathematics, with important applications, as history has demonstrated. In particular, the last financial crisis made it clear that credit risk estimation models must be characterized by mathematical precision and clarity. For this reason, the recent International Financial Reporting Standard (IFRS) 9 has introduced a forward-looking framework for credit risk estimation, thereby increasing the need for rigorous mathematical modelling. The aim of this doctoral thesis is to develop and explore the mathematical tools and models that arise from this need, guided by specific open problems created by the new standards, and to introduce a mathematical modelling framework that practitioners in the field can exploit. The research begins with discrete models, specifically Markov chains, which are deeply established tools in credit risk, developing a necessary mathematical framework for the reporting of credit ratings that ensures compliance with IFRS. We then use continuous-time stochastic models for the estimation of default probabilities as well as of future credit losses. More specifically, we examine a family of models that also introduce latent variables affecting the evolution of a credit product (e.g. macroeconomic variables), and we use techniques based on integral and partial integro-differential equations to describe and prove important mathematical properties of the associated default probabilities. To contribute to the applicability of the aforementioned methodologies, we develop and examine numerical methods for the estimation of default probabilities. We apply well-known discretization techniques to the partial integro-differential equations that arise under a range of models, demonstrating the variety of problems that can be solved with these techniques. Finally, inspired by recent research in machine learning, we consider ways in which it, and in particular deep neural network (DNN) models, can be used to estimate default probabilities by solving the corresponding equations. In closing, we examine theoretical and practical aspects of these models that must be taken into account in their application and in their comparison with established numerical methods.
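As a concrete instance of the discrete (Markov chain) side of this thesis, the sketch below computes cumulative default probabilities from a one-year rating transition matrix by matrix powers, the standard discrete-time calculation. The matrix entries are hypothetical, and IFRS 9-compliant reporting as developed in the thesis involves considerably more structure.

```python
import numpy as np

# Hypothetical one-year rating transition matrix (states: A, B, C, Default).
# Default is absorbing; the entries are illustrative, not estimated values.
P = np.array([
    [0.90, 0.07, 0.02, 0.01],
    [0.10, 0.80, 0.07, 0.03],
    [0.02, 0.15, 0.73, 0.10],
    [0.00, 0.00, 0.00, 1.00],
])

def cumulative_pd(P, horizon):
    """Cumulative default probability per initial rating over `horizon` years,
    read off the last column of the matrix power P^horizon (absorbing-state
    probability of the Markov chain)."""
    return np.linalg.matrix_power(P, horizon)[:-1, -1]

for h in (1, 5, 10):
    print(h, "year PD by rating:", np.round(cumulative_pd(P, h), 4))
```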
Item: An econometric analysis of high-frequency financial data (09-12-2021) Lamprinakou, Fiori; Λαμπρινάκου, Φιόρη; Athens University of Economics and Business, Department of Statistics; Papaspiliopoulos, Omiros; Demiris, Nikolaos; Pedeli, Xanthi; Papastamoulis, Panagiotis; Tsionas, Mike; Damien, Paul; Dellaportas, Petros
We present and compare observation-driven and parameter-driven models for predicting integer price changes of high-frequency financial data. We explore Bayesian inference via Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) for the observation-driven activity-direction-size (ADS) model, introduced by Rydberg and Shephard [1998a, 2003]. We extend the ADS model by proposing a parameter-driven model, using a Bernoulli generalized linear model (GLM) with a latent process in the mean. We propose a new decomposition model that uses trade intervals and is applied to data that allow three possible tick movements: a one-tick-up price change, a one-tick-down price change, or no price change. We model each component sequentially using a Binomial generalized linear autoregressive moving average (GLARMA) model, as well as a GLM with a latent process in the mean. We perform a simulation study to investigate the effectiveness of the proposed parameter-driven models using different algorithms within a Bayesian framework. We illustrate the analysis by modelling the transaction-by-transaction data of the E-mini Standard and Poor's (S&P) 500 index futures contract traded on the Chicago Mercantile Exchange's Globex platform between May 16th 2011 and May 24th 2011. In order to assess the predictive performance, we compare the mean square error (MSE) and mean absolute error (MAE) criteria, as well as four scalar performance measures, namely accuracy, sensitivity, precision and specificity, derived from the confusion matrix.
Item: Efficient Bayesian marginal likelihood estimation in generalised linear latent trait models (2013) Vitoratou, Vasiliki; Βιτωράτου, Βασιλική; Athens University of Economics and Business, Department of Statistics; Ntzoufras, Ioannis
The term latent variable model (LVM) refers to a broad family of models which are used to capture abstract concepts (unobserved / latent variables or factors) by means of multiple indicators (observed variables or items). The key idea is that all dependencies among p observed variables are attributed to k unobserved ones, where k << p. That is, the LVM methodology is a multivariate analysis technique which aims to reduce the dimensionality with as little loss of information as possible. Most importantly, the LVMs account for constructs that are not directly measurable, for instance individuals' emotions, traits, attitudes and perceptions. In the current thesis, the LVMs are studied within the Bayesian paradigm, where model evaluation is conducted on the basis of posterior model probabilities. A key role in this comparison is played by the models' marginal likelihood, which is often a high dimensional integral, not available in closed form. The properties of the LVMs are exploited here in order to efficiently approximate the marginal likelihood.
Item: Fault-specific insurance pricing, reserving and CAT bond design for seismic risk assessment; the case of Greece (08-05-2023) Λουλούδης, Εμμανουήλ; Louloudis, Emmanouil; Athens University of Economics and Business, Department of Statistics; Yannacopoulos, Athanasios; Tsekrekos, Andrianos; Psarakis, Stelios; Kyriakidis, Epaminondas; Pinto, Alberto-Adrego; Pinheiro, Diogo; Zimbidis, Alexandros
The main purpose of this thesis is the stochastic modelling and quantification of seismic risk in the context of insuring this particular risk. It lays the foundation for sounder actuarial decisions by the parties involved, such as premium pricing, the computation of the solvency capital requirement, and the design of the corresponding catastrophe (CAT) bond. The seismic models widely used in the insurance market (primarily Poisson processes) are based on historical catalogues, which provide relevant information covering a few hundred years. By contrast, the simulation mechanism presented in this thesis is based on the geometry of the faults, which carries information reaching up to 15 thousand years into the past, so that reliable actuarial quantities can be derived. In addition, Voronoi polygons or the epidemic-type ETAS model are used for modelling the historical catalogues, instead of standard Poisson processes, and continuous vulnerability curves are used instead of discrete and more uncertain damage probability matrices. Whereas similar market models are built to produce pricing by zone, in this work per-coordinate accuracy has been achieved, enabling an insurance company to have full knowledge of its building portfolio and thus to avoid or manage adverse-selection situations. Furthermore, the large-scale catastrophic claims of an earthquake render reinsurance companies unable to handle these costs on their own. For this reason, the insurance market created catastrophe bonds, so that this risk is transferred to the investors of the capital markets. In the present thesis, the design and pricing of the corresponding CAT bond is carried out as a function of the proposed seismic model, using either purely statistical discounting methods or a combination with machine learning, also taking into account the credit risk of each issuer. Such bonds can be issued for the effective treatment of the financial consequences of strong earthquakes.
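For the CAT bond record above, the basic pricing logic can be sketched in a few lines: simulate annual portfolio losses, map them through the bond's payoff, and discount the expectation. Everything below is a deliberately simplified, hypothetical design (linear principal write-down between attachment and exhaustion, expectation under the simulated measure with no risk loading, no issuer credit risk), not the thesis's fault-based model.

```python
import numpy as np

def price_cat_bond(face, coupon, r, attach, exhaust, annual_loss_sims):
    """Monte Carlo price of a one-period CAT bond whose principal is written
    down linearly between an attachment and an exhaustion loss level."""
    losses = np.asarray(annual_loss_sims)
    frac_lost = np.clip((losses - attach) / (exhaust - attach), 0.0, 1.0)
    payoff = face * (1.0 - frac_lost) + face * coupon   # principal + coupon
    return np.exp(-r) * payoff.mean()                   # discounted expectation

rng = np.random.default_rng(7)
# hypothetical annual losses: compound Poisson counts with Pareto severities
n_events = rng.poisson(0.8, size=20_000)
losses = np.array([(1e6 * (rng.pareto(2.5, k) + 1)).sum() for k in n_events])
print(price_cat_bond(face=100.0, coupon=0.06, r=0.03,
                     attach=5e6, exhaust=2e7, annual_loss_sims=losses))
```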
Item: Financial analysis of demographic ageing effect on pharmaceutical expenditure of Greece (07-2014) Politi, Anastasia S.; Πολίτη, Αναστασία Σ.; Athens University of Economics and Business, Department of Statistics; Φράγκος, Νικόλαος
The study aims to take a thorough look at the generating process of Greece's pharmaceutical expenditure volatility, taking into consideration latent cost-synthesis differentiations among distinct morbidity areas. It uses frequency-severity models that decompose the pharmaceutical demand for prescription drugs (Rxs) into a frequency component (claim frequency counts) and a severity component (claim size). It encompasses linear stochastic forms that treat health expenditure as an age-dependent branching process. The models also comprise the therapeutic category of Rxs as a controllable factor of population morbidity. Motivated by official population statistics which signal the impending serious growth of the seniors' share within the following decades, globally and particularly in Greece, this dissertation presents estimates of the effects of demographic senescence on pharmaceutical expenditure in the long run, through the implementation of projections for distinct therapeutic areas. A review of the literature to date does not show any frequency-severity analysis conducted on pharmaceutical care data, either at the international level or at the national level, where integrated information systems were developed with delay in relation to European systems. This study focused on this specific methodology and attempted to fill this knowledge gap in the domain of healthcare by producing not only general estimates for the entire study population but also analytical results for sub-populations with distinct morbidity characteristics. As regards the principal aim of this study, namely the estimation of the impact of ageing on pharmaceutical expenditure, it is suggested that this study can make a substantial contribution to this field, as it includes the assessment of relevant results for sub-populations with distinct morbidity characteristics, which, to the knowledge of the author, is a novel approach according to the literature to date regarding Greece. According to the results, frequency effects play the key role, relative to severity effects, in the generating process of pharmaceutical claim and loss intensity; this norm does not, however, apply within every therapeutic category. Pharmaceutical spending associated especially with the reimbursement of drugs for the genito-urinary system, ophthalmological diseases, antineoplastic and immunomodulating agents, and the respiratory system is more susceptible to the advent of the demographic ageing risk.
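The frequency-severity decomposition this study relies on has a compact core: for a claim count N and i.i.d. claim sizes X independent of N, the aggregate cost S satisfies E[S] = E[N] E[X] and Var[S] = E[N] Var[X] + Var[N] E[X]^2. A minimal sketch with hypothetical age-band figures (not the study's estimates):

```python
def expected_annual_cost(freq_rate, mean_severity):
    """Frequency-severity decomposition of expected aggregate cost:
    E[S] = E[N] * E[X] for N claims of i.i.d. size X independent of N."""
    return freq_rate * mean_severity

# hypothetical Rx frequency (claims/person/year) and mean claim size (EUR)
bands = {"40-59": (8.0, 14.0), "60-74": (14.0, 19.0), "75+": (21.0, 23.0)}
for band, (f, s) in bands.items():
    print(band, "expected annual cost per person:", expected_annual_cost(f, s))
```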
Item: The generalized Waring process - statistical inference and applications (2021) Zografi, Mimoza S.; Ζωγράφη, Μιμόζα; Athens University of Economics and Business, Department of Statistics; Teugels, Jef; Dimaki, A.; Zazanis, Michael; Zografos, Constantinos; Balakrishnan, Narayanaswamy; Katti, S. K.; Xekalaki, Evdokia
In this thesis we develop a theory of the Generalized Waring Process that relates to a wide variety of applications. In particular, we first define the Generalized Waring Process on the real line as a stationary but non-homogeneous Markov process. An application is provided in the context of modelling Internet access, and the model is fitted to real data. We then construct the Generalized Waring Process on a complete separable metric space, and the Generalized Waring Process is defined on R^d. By proving a number of its properties, such as additivity, stationarity, ergodicity and orderliness, we demonstrate that the new process is fully satisfactory for statistical applications.
Item: High dimensional time-varying covariance matrices with applications in finance (10-07-2011) Plataniotis, Anastasios; Πλατανιώτης, Αναστάσιος; Athens University of Economics and Business, Department of Statistics; Dellaportas, Petros
The scope of this Thesis is to provide an original contribution in the area of multivariate volatility modeling. Multivariate volatility modeling, in the present context, involves developing models that can adequately describe the covariance matrix process of multivariate financial time series. The development of efficient algorithms for Bayesian model estimation using Markov Chain Monte Carlo (MCMC) and nested Laplace approximations is our main objective, in order to provide parsimonious and flexible volatility models. A detailed review of univariate volatility models for financial time series is first presented in this Thesis. We illustrate the historical background of each model proposed and discuss its properties and advantages, as well as comment on the several estimation methods that have emerged. We also provide a comparative analysis via a small simulation example for the dominant models in the literature. Continuing the review from the univariate models, we move on to the multivariate case and extensively present competing models for covariance matrices. The main argument presented is that currently no model is able to fully capture the dynamics of higher dimensional covariance matrices, and that their relative performance and applicability depend on the dataset and problem of interest. Problems are mainly due to the positive-definiteness constraints required by most models, as well as the lack of interpretability of the model parameters in terms of the characteristics of the financial series. In addition, model development so far has focused mostly on parameter estimation and in-sample fit; it is our goal to examine the out-of-sample fit perspective of these models. We conclude the review section by proposing some small improvements for existing models that lead towards more efficient parameter estimates, faster estimation methods and accurate forecasts. Subsequently, a new class of multivariate models for volatility is introduced. The new model is based on the spectral decomposition of the time-changing covariance matrix and the incorporation of autocorrelation modeling for the time-changing elements. In these models we allow a priori all the elements of the covariance matrix to be time-changing as independent autoregressive processes; then, for any given dataset, we update our prior information and decide on the number of time-changing elements. Theoretical properties of the new model are presented, along with a series of estimation methods, Bayesian and classical. We conclude that in order to estimate these models one may use an MCMC method for portfolios of small dimension in terms of the size of the covariance matrix. For higher dimensions, due to the curse of dimensionality, we propose the use of a nested Laplace approximation approach that provides results much faster with a small loss in accuracy. Once the new model is proposed, along with the estimation methods, we compare its performance against competing models on simulated and real datasets; we also examine its performance both in small portfolios of fewer than 5 assets and in the high dimensional case of up to 100 assets. Results indicate that the new model provides significantly better estimates and projections than current models in the majority of the example datasets. We believe that small improvements in terms of forecasting are of significant importance in the finance industry. In addition, the new model allows for parameter interpretability and parsimony, which is of huge importance due to the curse of dimensionality. Simplifying inference and prediction for multivariate volatility models was our initial goal and inspiration. It is our hope that we have made a small step in that direction, and that a new path for studying multivariate financial data series has been revealed. We conclude by providing some proposals for future research that we hope may encourage others to develop this class of models further.
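The positive-definiteness constraint mentioned twice in the abstract above is exactly what a spectral parameterization sidesteps: evolving log-eigenvalues keeps every reconstructed matrix positive definite by construction. A minimal sketch of that idea follows (fixed eigenvectors and hypothetical AR(1) parameters; the thesis's model is richer):

```python
import numpy as np

def reconstruct_covariances(eigvecs, log_eigvals_t):
    """Rebuild a time series of covariance matrices Sigma_t = V D_t V' from a
    fixed orthogonal matrix V and time-varying log-eigenvalues.  Exponentiating
    keeps all eigenvalues positive, so every Sigma_t is positive definite."""
    V = eigvecs
    return np.array([V @ np.diag(np.exp(d)) @ V.T for d in log_eigvals_t])

rng = np.random.default_rng(3)
V, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # fixed orthogonal basis
T, phi = 200, 0.95
logs = np.zeros((T, 4))
for t in range(1, T):                          # independent AR(1) log-eigenvalue paths
    logs[t] = phi * logs[t - 1] + 0.1 * rng.normal(size=4)
sigmas = reconstruct_covariances(V, logs)
print(np.linalg.eigvalsh(sigmas[-1]).min() > 0)  # True: positive definite at every t
```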