Moving Average Model with an Alternative GARCH-Type Error

Huafeng ZHU, Xingfa ZHANG, Xin LIANG, Yuan LI

Journal of Systems Science and Information, 2018, Vol. 6, Issue 2: 165-177. DOI: 10.21078/JSSI-2018-165-13

Abstract

Motivated by the double autoregressive model of order p (DAR(p) model), in this paper we study the moving average model with an alternative GARCH error. The model extends the DAR(p) model by letting the order p go to infinity. The quasi maximum likelihood estimator of the parameters in the model is shown to be asymptotically normal without any strong moment conditions. Simulation results confirm that our estimators perform well. We also apply our model to a real data set, for which it has better fitting performance than the DAR model.

Key words

moving average model / double autoregressive model / quasi maximum likelihood estimator

Cite this article

Huafeng ZHU, Xingfa ZHANG, Xin LIANG, Yuan LI. Moving Average Model with an Alternative GARCH-Type Error. Journal of Systems Science and Information, 2018, 6(2): 165-177. https://doi.org/10.21078/JSSI-2018-165-13

1 Introduction

Consider the DAR(p) model with the form
$$y_t=\sum_{i=1}^{p}\phi_i y_{t-i}+\eta_t\sqrt{\omega+\sum_{i=1}^{p}\alpha_i y_{t-i}^2},$$
(1)
where $\omega>0$, $\alpha_i>0$, $t\in\mathbb{N}\equiv\{1,2,\cdots\}$, $\{\eta_t\}$ is an independent and identically distributed sequence with mean 0 and variance 1, and $y_s$ is independent of $\{\eta_t: t\ge 1\}$ for $s\le 0$. Let $\mathcal{F}_t$ be the $\sigma$-field generated by $\{\eta_t,\eta_{t-1},\cdots,\eta_1,y_0,y_{-1},\cdots,y_{1-p}\}$, $t\in\mathbb{N}$. Given $\mathcal{F}_{t-1}$, the conditional variance of $y_t$ is $\mathrm{var}(y_t|\mathcal{F}_{t-1})=\omega+\sum_{i=1}^{p}\alpha_i y_{t-i}^2$. Compared to the classic autoregressive (AR) model, Ling[1, 2] found that the DAR(p) model has the following novelty: even if some roots of $1-\sum_{i=1}^{p}\phi_i z^i=0$ lie on or outside the unit circle and $Ey_t^2=\infty$, the quasi maximum likelihood estimator (QMLE) for the DAR(p) model is still asymptotically normal, which does not hold for the classic AR(p) model with i.i.d. errors. Due to the novelty and simplicity of DAR models, many extensions have been developed based on Ling[2]. For example, Ling and Li[3] and Chen, et al.[4] considered asymptotic inference for nonstationary DAR models; Kwok and Li[5] investigated diagnostic checking of the DAR model; Liu[6] studied the structure of a DAR process driven by a hidden Markov chain; Cai, et al.[7] proposed quantile DAR models for financial returns; Zhu and Ling[8] gave the asymptotic theory of quasi maximum exponential likelihood estimation for the DAR model; Guo, et al.[9] proposed factor DAR models with application to simultaneous causality testing. Other extensions include threshold DAR models by Zhang, et al.[10], Li, et al.[11] and Li, et al.[12], mixture DAR models by Li, et al.[13], and explosive DAR models by Liu, et al.[14].
However, to the best of our knowledge, little work has been done on the question: how can one let $p$ go to $\infty$ in Model (1)? The question is undoubtedly interesting, because more information would then be used for statistical inference. Directly setting $p=\infty$ in Model (1) is not feasible due to its complexity. Fortunately, just as the ARCH(p) model (Engle[15]) was extended to the GARCH model (Bollerslev[16]), the above question can be partially solved in an indirect way. Motivated by the moving average (MA) model and the GARCH specification adopted by Christensen, et al.[17], we consider the following moving average model with an alternative GARCH error
$$y_t=\phi\varepsilon_{t-1}+\varepsilon_t,$$
(2)
$$\varepsilon_t=\eta_t\sqrt{h_t},$$
(3)
$$h_t=\omega+\alpha y_{t-1}^2+\beta h_{t-1},$$
(4)
where $|\phi|<1$, $\omega,\alpha>0$, $0<\beta<1$, $\{\eta_t\}$ is an independent and identically distributed sequence with mean 0 and variance 1, and $y_s$ is independent of $\{\eta_t: t>s\}$. Equations (2)–(4) constitute simply the MA(1) model with an error term whose conditional variance is specified in the alternative GARCH form (4). Different from the traditional MA-GARCH model, the usual $\varepsilon_{t-1}^2$ in (4) is replaced by $y_{t-1}^2$. This kind of GARCH specification has been adopted by several authors, such as Christensen, et al.[17], Zhang, et al.[18] and Zhang, et al.[19]. By simple iterations (noting that $\varepsilon_t=y_t-\phi\varepsilon_{t-1}=\sum_{i=0}^{\infty}(-1)^i\phi^i y_{t-i}$ and $h_t=\frac{\omega}{1-\beta}+\alpha\sum_{j=1}^{\infty}\beta^{j-1}y_{t-j}^2$), (2)–(4) can be transformed into the following form
$$y_t=\sum_{i=1}^{\infty}(-1)^{i-1}\phi^i y_{t-i}+\eta_t\sqrt{\frac{\omega}{1-\beta}+\alpha\sum_{j=1}^{\infty}\beta^{j-1}y_{t-j}^2}.$$
(5)
Compared to (1), (5) straightforwardly explains why (2)–(4) can be considered as a kind of DAR($\infty$) model. Let $\mathcal{F}_t$ be the $\sigma$-field generated by $\{\eta_t,\eta_{t-1},\cdots,\eta_1,y_0,y_{-1}\}$. Given $\mathcal{F}_{t-1}$, the conditional variance of $y_t$ in (2) or (5) is $\mathrm{var}(y_t|\mathcal{F}_{t-1})=h_t$.
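To make the data generating process concrete, a minimal simulation sketch of Model (2)–(4) is given below; the function name, the burn-in length and the standard normal choice for $\eta_t$ are our own illustrative assumptions.

```python
import numpy as np

def simulate_ma_garch(n, phi, omega, alpha, beta, burn=500, seed=0):
    """Simulate y_t from Model (2)-(4): y_t = phi*eps_{t-1} + eps_t,
    eps_t = eta_t*sqrt(h_t), h_t = omega + alpha*y_{t-1}^2 + beta*h_{t-1}."""
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal(n + burn)      # i.i.d. N(0, 1) innovations
    y = np.zeros(n + burn)
    eps = np.zeros(n + burn)
    h = np.zeros(n + burn)
    h[0] = omega / (1.0 - beta)              # a convenient starting level
    eps[0] = eta[0] * np.sqrt(h[0])
    y[0] = eps[0]
    for t in range(1, n + burn):
        h[t] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1]
        eps[t] = eta[t] * np.sqrt(h[t])
        y[t] = phi * eps[t - 1] + eps[t]
    return y[burn:]                          # drop the burn-in period
```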
Since more information is used, Model (2)–(4) is expected to have better fitting performance than the common DAR model with finite order. Another question is whether the QMLE of (2)–(4) enjoys asymptotic results similar to those for the DAR(p) model. In this paper, these two issues are investigated. The rest of the paper is organized as follows. In Section 2, we discuss the asymptotic normality of the QMLE for the considered model. Simulations and empirical studies are presented in Sections 3 and 4, respectively. We conclude the article in Section 5, and the proofs are given in the Appendix.

2 Quasi Maximum Likelihood Estimation

Define $\theta=(\phi,\omega,\alpha,\beta)^{\tau}\in\Theta$. Here $\Theta$ is a parameter space for Model (2)–(4) of the form $\Theta=\{\theta:\ -1<\underline{\phi}<\phi<\overline{\phi}<1,\ 0<\underline{\omega}<\omega<\overline{\omega},\ 0<\underline{\alpha}<\alpha<\overline{\alpha},\ 0<\underline{\beta}<\beta<\overline{\beta}<1\}$. Suppose that the true parameter, say $\theta_0=(\phi_0,\omega_0,\alpha_0,\beta_0)^{\tau}$, is an interior point of the considered parameter space. We need to estimate $\theta$ based on the observations $\{y_t\}_{t=1}^{n}$ and initial values $y_0,h_0,\varepsilon_0$. Following the convention in the literature, we consider the quasi conditional log-likelihood function (apart from a constant term)
$$L_n(\theta)=\frac{1}{n}\sum_{t=1}^{n}l_t(\theta)=\frac{1}{n}\sum_{t=1}^{n}\left\{\log h_t(\theta)+\frac{\varepsilon_t^2(\theta)}{h_t(\theta)}\right\},$$
(6)
where $\varepsilon_t(\theta)=y_t-\phi\varepsilon_{t-1}(\theta)$. Based on $\{y_t\}_{t=1}^{n}$ and the initial values, the estimator for $\theta$ is defined as
$$\hat{\theta}_n=\arg\min_{\theta\in\Theta}L_n(\theta).$$
(7)
For convenience of notation, we put
$$\varsigma=E\eta_t^4-1,\quad h_t=h_t(\theta_0),\quad \varepsilon_t=\varepsilon_t(\theta_0)=y_t-\phi_0\varepsilon_{t-1}(\theta_0),\quad H_t=\left[\frac{1}{1-\beta_0},\ \sum_{j=1}^{\infty}\beta_0^{j-1}y_{t-j}^2,\ \sum_{j=1}^{\infty}\beta_0^{j-1}h_{t-j}\right]^{\tau}.$$
Before stating the asymptotic results for $\hat{\theta}_n$, we first make the following assumptions.
(ⅰ) The i.i.d. $(0,1)$ process $\{\eta_t\}$ satisfies $E(\eta_t^4)<\infty$.
(ⅱ) The series {yt} generated from Model (2)–(4) is strictly stationary and geometrically ergodic.
Theorem 1  Consider Model (2)–(4) and the quasi log-likelihood function $L_n(\theta)$ given by (6), and suppose Assumptions (ⅰ)–(ⅱ) hold. Then there exists a fixed open neighborhood $U(\theta_0)\subset\Theta$ such that, with probability one, as $n\to\infty$, $L_n(\theta)$ has a unique minimum point $\hat{\theta}_n$ in $U(\theta_0)$. Furthermore,
$$\sqrt{n}(\hat{\theta}_n-\theta_0)\xrightarrow{L}N(0,\ \Omega_I^{-1}\Omega_S\Omega_I^{-1}),$$
where
$$\Omega_S=E\begin{bmatrix}\dfrac{4}{h_t}\Big(\sum_{i=1}^{\infty}(-1)^i i\phi_0^{i-1}y_{t-i}\Big)^2 & 0\\[2mm] 0 & \dfrac{\varsigma}{h_t^2}H_tH_t^{\tau}\end{bmatrix},\qquad \Omega_I=E\begin{bmatrix}\dfrac{2}{h_t}\Big(\sum_{i=1}^{\infty}(-1)^i i\phi_0^{i-1}y_{t-i}\Big)^2 & 0\\[2mm] 0 & \dfrac{1}{h_t^2}H_tH_t^{\tau}\end{bmatrix}.$$
Remark 1  The proof of Theorem 1 is based on verifying the conditions of Lemma 1 in Jensen and Rahbek[20]. It can be seen that $Ey_t^2<\infty$ is not required to guarantee the validity of the theorem, and such a result is consistent with that for the DAR(p) model in Ling[2]. In practice, initial values $h_0,\varepsilon_0$ are needed to calculate $L_n(\theta)$, and commonly one can set $h_0=\mathrm{var}(y_t)$, $\varepsilon_0=0$. The matrices $\Omega_S,\Omega_I$ can be approximated by the relevant sample means after the parameters $\theta=(\phi,\omega,\alpha,\beta)^{\tau}$ and $h_t(\theta)$ have been estimated.
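As a rough illustration of how the QMLE in (7) can be computed in practice, the following sketch minimizes (6) numerically with the initial values suggested in Remark 1; the optimizer, the box constraints (matching the range used in Section 3), the starting point and the choice $y_0=0$ are our own illustrative assumptions, not part of the original procedure.

```python
import numpy as np
from scipy.optimize import minimize

def neg_quasi_loglik(theta, y, y0=0.0, eps0=0.0, h0=None):
    """Quasi log-likelihood (6), up to a constant: (1/n) * sum(log h_t + eps_t^2 / h_t)."""
    phi, omega, alpha, beta = theta
    h_prev = np.var(y) if h0 is None else h0   # h_0 = var(y_t), as in Remark 1
    eps_prev, y_prev = eps0, y0                # eps_0 = 0; y_0 = 0 is our simplification
    value = 0.0
    for yt in y:
        h_t = omega + alpha * y_prev ** 2 + beta * h_prev
        eps_t = yt - phi * eps_prev
        value += np.log(h_t) + eps_t ** 2 / h_t
        eps_prev, h_prev, y_prev = eps_t, h_t, yt
    return value / len(y)

def fit_qmle(y, theta_init=(0.1, 0.1, 0.1, 0.5)):
    """Minimize (6) over a bounded region contained in the parameter space."""
    bounds = [(-0.99, 0.99), (0.001, 1.5), (0.001, 0.99), (0.001, 0.99)]
    res = minimize(neg_quasi_loglik, theta_init, args=(y,),
                   method="L-BFGS-B", bounds=bounds)
    return res.x  # (phi_hat, omega_hat, alpha_hat, beta_hat)
```

Asymptotic standard deviations can then be obtained, as described in Remark 1, by plugging the estimates into sample analogues of $\Omega_S$ and $\Omega_I$ and forming $\Omega_I^{-1}\Omega_S\Omega_I^{-1}/n$.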

3 Simulations

This section examines the performance of the QMLE through Monte Carlo experiments. We study the bias and standard deviations of the estimates. The series $y_t$ is generated from Model (2)–(4) with $\eta_t\overset{\mathrm{i.i.d.}}{\sim}N(0,1)$. Writing $\theta=(\phi,\omega,\alpha,\beta)^{\tau}$, the following four cases are considered:
$$\theta=(0.9,0.1,0.3,0.5)^{\tau},\quad \theta=(0.2,0.1,0.2,0.7)^{\tau},\quad \theta=(0.1,0.8,0.7,0.1)^{\tau},\quad \theta=(0.55,0.8,0.7,0.1)^{\tau}.$$
The sample sizes are $n=400$ and $n=800$, and 1000 replications are used. To run the estimation, we set the initial values $h_0=\mathrm{var}(y_t)$, $\varepsilon_0=0$, and restrict $\theta$ to $[-0.99,0.99]\times[0.001,1.5]\times[0.001,0.99]\times[0.001,0.99]$. A sketch of such a Monte Carlo experiment is given below.
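A possible driver for this experiment, reusing the hypothetical helpers simulate_ma_garch() and fit_qmle() sketched in Sections 1 and 2, might look as follows.

```python
import numpy as np

def monte_carlo(theta_true, n, reps=1000):
    """Empirical bias and standard deviation of the QMLE over `reps` replications."""
    theta_true = np.asarray(theta_true, dtype=float)
    estimates = np.empty((reps, 4))
    for r in range(reps):
        y = simulate_ma_garch(n, *theta_true, seed=r)  # one simulated sample
        estimates[r] = fit_qmle(y)                     # QMLE for this sample
    bias = estimates.mean(axis=0) - theta_true         # empirical bias
    sd = estimates.std(axis=0, ddof=1)                 # empirical SD
    return bias, sd

# Example: bias, sd = monte_carlo((0.9, 0.1, 0.3, 0.5), n=400)
```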
Table 1 summarizes the empirical biases, empirical standard deviations (SD) and asymptotic standard deviations (AD) of the QMLEs of θ=(ϕ,ω,α,β)τ. The ADs are calculated according to Theorem 1 and its remark. Table 1 shows that all the biases of the QMLEs are very small. It can be seen that the SDs and ADs are very close. As the sample size n is increased from 400 to 800, all SDs and ADs become smaller. The simulation results indicate that the QMLE performs well.
Table 1 Bias and standard deviations of QMLEs
$\theta=(\phi,\omega,\alpha,\beta)^{\tau}$ (sample size)   Statistic   $\hat{\phi}$   $\hat{\omega}$   $\hat{\alpha}$   $\hat{\beta}$
(0.9,0.1,0.3,0.5)τ Bias 0.0047 0.0148 0.0038 0.0157
(n=400) SD 0.0229 0.0717 0.0614 0.0712
AD 0.0179 0.0259 0.0481 0.0515
(0.9,0.1,0.3,0.5)τ Bias 0.0027 0.0077 0.0034 0.0089
(n=800) SD 0.0155 0.0535 0.0462 0.0498
AD 0.0125 0.0213 0.0388 0.0412
(0.2,0.1,0.2,0.7)τ Bias 0.0013 0.0243 0.0016 0.0286
(n=400) SD 0.0537 0.0739 0.0595 0.1082
AD 0.0548 0.0384 0.0556 0.0680
(0.2,0.1,0.2,0.7)τ Bias 0.0011 0.0101 0.0006 0.0131
(n=800) SD 0.0384 0.0354 0.0390 0.0591
AD 0.0386 0.0321 0.0395 0.0629
(0.1,0.8,0.7,0.1)τ Bias 0.0009 0.0223 0.0119 0.0076
(n=400) SD 0.0678 0.1475 0.1108 0.0633
AD 0.0617 0.1239 0.1071 0.0495
(0.1,0.8,0.7,0.1)τ Bias 0.0019 0.0105 0.0078 0.0020
(n=800) SD 0.0455 0.1078 0.0850 0.0491
AD 0.0462 0.0840 0.0820 0.0435
(0.55,0.8,0.7,0.1)τ Bias 0.0018 0.0217 0.0052 0.0049
(n=400) SD 0.0385 0.1500 0.1055 0.0539
AD 0.0359 0.1345 0.0908 0.0400
(0.55,0.8,0.7,0.1)τ Bias 0.0005 0.0048 0.0032 0.0010
(n=800) SD 0.0267 0.1020 0.0724 0.0363
AD 0.0245 0.0897 0.0693 0.0204
†Number of replications=1000.

4 Empirical Studies

In this section, Model (2)–(4) is applied to a real data set. We analyze the US monthly interest rate series over the period July 1972–August 2001. The series is the 90-day treasury bill rate and has 350 observations in total. This data set has also been analyzed by Ling[1]. Let $x_t$ be the logarithm of the 90-day treasury bill rate and $y_t=x_t-x_{t-1}$. We plot $x_t$ and $y_t$ in Figures 1 and 2, respectively.
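In code, the transformation from the raw rate series to $y_t$ could be carried out as follows; the file name is hypothetical, and fit_qmle refers to the QMLE sketch in Section 2.

```python
import numpy as np

# Hypothetical file containing the monthly 90-day treasury bill rate,
# July 1972 - August 2001 (350 observations).
rate = np.loadtxt("tbill_90day_monthly.txt")
x = np.log(rate)            # x_t: logarithm of the rate
y = np.diff(x)              # y_t = x_t - x_{t-1}
# theta_hat = fit_qmle(y)   # fit Model (2)-(4) by QMLE
```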
Figure 1  Time plot of $x_t$ (logarithms of the US 90-day treasury bill rate)


Figure 2  Time plot of $y_t$ ($y_t=x_t-x_{t-1}$)


A DAR(1) model was fitted to $y_t$ by Ling[1], and the result is
$$y_t=\underset{(0.0777)}{0.3810}\,y_{t-1}+\varepsilon_t,\qquad \varepsilon_t=\eta_t\sqrt{\underset{(0.0002)}{0.0023}+\underset{(0.0908)}{0.6209}\,y_{t-1}^2},$$
(8)
where the values in parentheses are the corresponding standard deviations.
Similarly, we fit Model (2)–(4) to the series $\{y_t\}$, and the result is
$$y_t=\underset{(0.0575)}{0.4065}\,\varepsilon_{t-1}+\varepsilon_t,\qquad \varepsilon_t=\eta_t\sqrt{h_t},\qquad h_t=\underset{(0.0003)}{0.0010}+\underset{(0.1246)}{0.4339}\,y_{t-1}^2+\underset{(0.1291)}{0.2823}\,h_{t-1},$$
(9)
where the standard deviations in parentheses are calculated based on Theorem 1.
For comparison, the root mean squared error (RMSE) of the one-step-ahead forecasts and the value of the log-likelihood are listed in Table 2 for Models (8) and (9). Further, Figure 3 shows time plots of the data $x_t$ together with 95% confidence intervals for the one-step-ahead forecasts based on (8) and (9), starting from February 1989. The confidence intervals based on Model (9) are narrower than those based on (8) at nearly all data points. From Table 2 and Figure 3, Model (9) is superior to the DAR(1) model.
Table 2 Fitting performance for Models (8)–(9)
Model RMSE for one-step-ahead forecast Log-likelihood value
(8) 0.0590 764.4363
(9) 0.0583 780.5071
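A minimal sketch of how such one-step-ahead forecasts and 95% intervals can be produced from a fitted model is given below; the filtering recursion, the normal quantile 1.96 and the function name are our own illustrative assumptions (intervals for $x_{t+1}$ follow by adding $x_t$ to those for $y_{t+1}$).

```python
import numpy as np

def one_step_forecasts(y, theta, eps0=0.0, h0=None):
    """One-step-ahead forecasts of y_{t+1} under Model (2)-(4):
    mean phi*eps_t, variance h_{t+1} = omega + alpha*y_t^2 + beta*h_t."""
    phi, omega, alpha, beta = theta
    h = np.var(y) if h0 is None else h0
    eps = eps0
    mean, lower, upper = [], [], []
    for yt in y:
        eps = yt - phi * eps                      # filtered eps_t
        h = omega + alpha * yt ** 2 + beta * h    # h_{t+1}
        m = phi * eps                             # E(y_{t+1} | F_t)
        mean.append(m)
        lower.append(m - 1.96 * np.sqrt(h))       # 95% lower bound
        upper.append(m + 1.96 * np.sqrt(h))       # 95% upper bound
    return np.array(mean), np.array(lower), np.array(upper)

# RMSE of the one-step-ahead forecasts:
# m, lo, hi = one_step_forecasts(y, theta_hat)
# rmse = np.sqrt(np.mean((y[1:] - m[:-1]) ** 2))
```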
Figure 3  Plot of the data $x_t$ (solid line) and the 95% forecasting confidence intervals based on Model (8) (circles) and Model (9) (triangles)


5 Conclusions

Motivated by the study of the DAR(p) model with $p$ going to $\infty$, in this article we propose a special moving average model with an alternative GARCH-type error. Asymptotic normality of the QMLE of the model is established under weak conditions. The simulations show that the estimation performs well, and the empirical study shows that the considered model can have better fitting performance than the previous DAR model. This suggests that our model framework can be used in applications where a DAR model structure is applicable.

Appendix

Lemma 1  (Lemma 1 of Jensen and Rahbek[20]) Denote by $L_n(\theta)$ a function of the observations $y_1,y_2,\cdots,y_n$ and the parameter $\theta\in\Theta\subset\mathbb{R}^k$. Suppose $\theta_0$ is an interior point of $\Theta$. Assume $L_n(\cdot):\mathbb{R}^k\to\mathbb{R}$ is three times continuously differentiable in $\theta$ and
(ⅰ) As $n\to\infty$, $\sqrt{n}\,\frac{\partial L_n(\theta_0)}{\partial\theta}\xrightarrow{L}N(0,\Omega_S)$, $\Omega_S>0$;
(ⅱ) As $n\to\infty$, $\frac{\partial^2 L_n(\theta_0)}{\partial\theta\partial\theta^{\tau}}\xrightarrow{P}\Omega_I>0$;
(ⅲ) $\max_{i,j,k}\sup_{\theta\in N(\theta_0)}\left|\frac{\partial^3 L_n(\theta)}{\partial\theta_i\partial\theta_j\partial\theta_k}\right|\le c_n$, where $0\le c_n\xrightarrow{P}c<\infty$.
Here, $N(\theta_0)$ is a neighborhood of $\theta_0$. Then there exists a fixed open neighborhood $U(\theta_0)\subset N(\theta_0)$ such that:
(ⅰ) As $n\to\infty$, with probability one there exists a minimum point $\hat{\theta}_n$ of $L_n(\theta)$ in $U(\theta_0)$ and $L_n(\theta)$ is convex in $U(\theta_0)$; moreover, $\hat{\theta}_n$ is unique and solves $\frac{\partial L_n(\hat{\theta}_n)}{\partial\theta}=0$;
(ⅱ) As $n\to\infty$, $\hat{\theta}_n\xrightarrow{P}\theta_0$ and $\sqrt{n}(\hat{\theta}_n-\theta_0)\xrightarrow{L}N(0,\ \Omega_I^{-1}\Omega_S\Omega_I^{-1})$.
Before giving the proof of Theorem 1, we need to state some expressions and one lemma. Let $s_1,s_2,s_3$ denote indices taking values in the set $\{i,j,k\}$. In terms of (6), it is not difficult to get the derivatives of the quasi log-likelihood function with respect to $\theta$:
$$\frac{\partial l_t(\theta)}{\partial\theta}=\left(1-\frac{\varepsilon_t^2(\theta)}{h_t(\theta)}\right)\frac{1}{h_t(\theta)}\frac{\partial h_t(\theta)}{\partial\theta}+\frac{2\varepsilon_t(\theta)}{h_t(\theta)}\frac{\partial\varepsilon_t(\theta)}{\partial\theta},$$
(A.1)
$$\frac{\partial^2 l_t(\theta)}{\partial\theta\partial\theta^{\tau}}=-\frac{1}{h_t^2(\theta)}\left(1-\frac{2\varepsilon_t^2(\theta)}{h_t(\theta)}\right)\frac{\partial h_t(\theta)}{\partial\theta}\frac{\partial h_t(\theta)}{\partial\theta^{\tau}}-\frac{2\varepsilon_t(\theta)}{h_t^2(\theta)}\frac{\partial h_t(\theta)}{\partial\theta}\frac{\partial\varepsilon_t(\theta)}{\partial\theta^{\tau}}+\frac{2}{h_t(\theta)}\frac{\partial\varepsilon_t(\theta)}{\partial\theta}\frac{\partial\varepsilon_t(\theta)}{\partial\theta^{\tau}}+\frac{2\varepsilon_t(\theta)}{h_t(\theta)}\frac{\partial^2\varepsilon_t(\theta)}{\partial\theta\partial\theta^{\tau}}-\frac{2\varepsilon_t(\theta)}{h_t^2(\theta)}\frac{\partial\varepsilon_t(\theta)}{\partial\theta}\frac{\partial h_t(\theta)}{\partial\theta^{\tau}}+\frac{1}{h_t(\theta)}\left(1-\frac{\varepsilon_t^2(\theta)}{h_t(\theta)}\right)\frac{\partial^2 h_t(\theta)}{\partial\theta\partial\theta^{\tau}},$$
(A.2)
$$\frac{\partial^3 l_t(\theta)}{\partial\theta_i\partial\theta_j\partial\theta_k}=d_{1t}(\theta)+d_{2t}(\theta)+d_{3t}(\theta),$$
(A.3)
where,
$$d_{1t}(\theta)=\frac{2\big(h_t(\theta)-3\varepsilon_t^2(\theta)\big)}{h_t^4(\theta)}\frac{\partial h_t(\theta)}{\partial\theta_i}\frac{\partial h_t(\theta)}{\partial\theta_j}\frac{\partial h_t(\theta)}{\partial\theta_k}+\frac{2\varepsilon_t(\theta)}{h_t^3(\theta)}\sum_{s_1,s_2,s_3}\frac{\partial h_t(\theta)}{\partial\theta_{s_1}}\frac{\partial h_t(\theta)}{\partial\theta_{s_2}}\frac{\partial\varepsilon_t(\theta)}{\partial\theta_{s_3}}-\frac{1}{h_t^2(\theta)}\sum_{s_1,s_2,s_3}\frac{\partial h_t(\theta)}{\partial\theta_{s_1}}\frac{\partial\varepsilon_t(\theta)}{\partial\theta_{s_2}}\frac{\partial\varepsilon_t(\theta)}{\partial\theta_{s_3}},$$
(A.4)
$$d_{2t}(\theta)=\frac{1}{h_t^2(\theta)}\left(\frac{\varepsilon_t^2(\theta)}{h_t(\theta)}-1\right)\sum_{s_1,s_2,s_3}\frac{\partial h_t(\theta)}{\partial\theta_{s_1}}\frac{\partial^2 h_t(\theta)}{\partial\theta_{s_2}\partial\theta_{s_3}}-\frac{\varepsilon_t(\theta)}{h_t^2(\theta)}\sum_{s_1,s_2,s_3}\frac{\partial h_t(\theta)}{\partial\theta_{s_1}}\frac{\partial^2\varepsilon_t(\theta)}{\partial\theta_{s_2}\partial\theta_{s_3}}-\frac{\varepsilon_t(\theta)}{h_t^2(\theta)}\sum_{s_1,s_2,s_3}\frac{\partial\varepsilon_t(\theta)}{\partial\theta_{s_1}}\frac{\partial^2 h_t(\theta)}{\partial\theta_{s_2}\partial\theta_{s_3}}+\frac{1}{h_t(\theta)}\sum_{s_1,s_2,s_3}\frac{\partial\varepsilon_t(\theta)}{\partial\theta_{s_1}}\frac{\partial^2\varepsilon_t(\theta)}{\partial\theta_{s_2}\partial\theta_{s_3}},$$
(A.5)
$$d_{3t}(\theta)=\frac{1}{h_t(\theta)}\left(1-\frac{\varepsilon_t^2(\theta)}{h_t(\theta)}\right)\frac{\partial^3 h_t(\theta)}{\partial\theta_i\partial\theta_j\partial\theta_k}+\frac{2\varepsilon_t(\theta)}{h_t(\theta)}\frac{\partial^3\varepsilon_t(\theta)}{\partial\theta_i\partial\theta_j\partial\theta_k}.$$
(A.6)
Simple calculations give
$$\varepsilon_t(\theta)=y_t-\phi\varepsilon_{t-1}(\theta)=\sum_{i=0}^{\infty}(-1)^i\phi^i y_{t-i},$$
(A.7)
$$\frac{\partial\varepsilon_t(\theta)}{\partial\omega}=\frac{\partial\varepsilon_t(\theta)}{\partial\alpha}=\frac{\partial\varepsilon_t(\theta)}{\partial\beta}=0,\qquad \frac{\partial\varepsilon_t(\theta)}{\partial\phi}=\sum_{i=1}^{\infty}(-1)^i i\phi^{i-1}y_{t-i},$$
(A.8)
$$\frac{\partial^2\varepsilon_t(\theta)}{\partial\phi^2}=\sum_{i=2}^{\infty}(-1)^i i(i-1)\phi^{i-2}y_{t-i},$$
(A.9)
$$\frac{\partial^3\varepsilon_t(\theta)}{\partial\phi^3}=\sum_{i=3}^{\infty}(-1)^i i(i-1)(i-2)\phi^{i-3}y_{t-i}.$$
(A.10)
Similarly, one can get
$$h_t(\theta)=\frac{\omega}{1-\beta}+\alpha\sum_{j=0}^{\infty}\beta^j y_{t-j-1}^2,$$
(A.11)
$$\frac{\partial h_t(\theta)}{\partial\phi}=0,\qquad \frac{\partial h_t(\theta)}{\partial\omega}=\frac{1}{1-\beta},\qquad \frac{\partial h_t(\theta)}{\partial\alpha}=\sum_{j=0}^{\infty}\beta^j y_{t-j-1}^2,$$
(A.12)
$$\frac{\partial h_t(\theta)}{\partial\beta}=\sum_{j=1}^{\infty}\beta^{j-1}h_{t-j}(\theta)=\frac{\omega}{(1-\beta)^2}+\alpha\sum_{j=1}^{\infty}j\beta^{j-1}y_{t-j-1}^2,$$
(A.13)
$$\frac{\partial^2 h_t(\theta)}{\partial\omega\partial\beta}=\frac{\partial^2 h_t(\theta)}{\partial\beta\partial\omega}=\frac{1}{(1-\beta)^2},\qquad \frac{\partial^2 h_t(\theta)}{\partial\alpha\partial\beta}=\frac{\partial^2 h_t(\theta)}{\partial\beta\partial\alpha}=\sum_{j=1}^{\infty}j\beta^{j-1}y_{t-j-1}^2,$$
(A.14)
$$\frac{\partial^2 h_t(\theta)}{\partial\beta^2}=\frac{2\omega}{(1-\beta)^3}+\alpha\sum_{j=2}^{\infty}j(j-1)\beta^{j-2}y_{t-j-1}^2,$$
(A.15)
$$\frac{\partial^3 h_t(\theta)}{\partial\omega\partial\beta^2}=\frac{2}{(1-\beta)^3},\qquad \frac{\partial^3 h_t(\theta)}{\partial\alpha\partial\beta^2}=\sum_{j=2}^{\infty}j(j-1)\beta^{j-2}y_{t-j-1}^2,$$
(A.16)
$$\frac{\partial^3 h_t(\theta)}{\partial\beta^2\partial\alpha}=\frac{\partial^3 h_t(\theta)}{\partial\beta\partial\alpha\partial\beta}=\sum_{j=2}^{\infty}j(j-1)\beta^{j-2}y_{t-j-1}^2,$$
(A.17)
$$\frac{\partial^3 h_t(\theta)}{\partial\beta^3}=\frac{6\omega}{(1-\beta)^4}+\alpha\sum_{j=3}^{\infty}j(j-1)(j-2)\beta^{j-3}y_{t-j-1}^2.$$
(A.18)
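As a quick sanity check on these formulas, the recursive representation $\frac{\partial h_t(\theta)}{\partial\beta}=h_{t-1}(\theta)+\beta\frac{\partial h_{t-1}(\theta)}{\partial\beta}$ implied by (4) and (A.13) can be compared with a finite-difference approximation; below is a rough sketch with arbitrary test values (the helper name and the data are hypothetical).

```python
import numpy as np

def h_and_dh_dbeta(y, omega, alpha, beta, h0=0.0, dh0=0.0):
    """Run h_t = omega + alpha*y_{t-1}^2 + beta*h_{t-1} together with the
    derivative recursion dh_t/dbeta = h_{t-1} + beta*dh_{t-1}/dbeta (cf. (A.13))."""
    h, dh = h0, dh0
    for y_lag in y[:-1]:                   # y_lag plays the role of y_{t-1}
        h, dh = omega + alpha * y_lag ** 2 + beta * h, h + beta * dh
    return h, dh

rng = np.random.default_rng(1)
y = rng.standard_normal(2000)
omega, alpha, beta, d = 0.1, 0.3, 0.5, 1e-6
_, dh = h_and_dh_dbeta(y, omega, alpha, beta)
h_plus, _ = h_and_dh_dbeta(y, omega, alpha, beta + d)
h_minus, _ = h_and_dh_dbeta(y, omega, alpha, beta - d)
print(dh, (h_plus - h_minus) / (2 * d))    # the two values should agree closely
```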
Lemma 2  Let $1\le i,j,k\le 4$. Under Assumptions (ⅰ)–(ⅱ) of Theorem 1, there exist positive stationary series $\{w_{lt}\}$, $l=1,2,\cdots,7$ (independent of $\theta$), such that $E(w_{lt})^q<\infty$ for $q\ge 1$ and
$$\sup_{\theta\in\Theta}\left|\frac{\varepsilon_t(\theta)}{\sqrt{h_t(\theta)}}\right|\le w_{1t},\quad \sup_{\theta\in\Theta}\left|\frac{1}{\sqrt{h_t(\theta)}}\frac{\partial\varepsilon_t(\theta)}{\partial\theta_i}\right|\le w_{2t},\quad \sup_{\theta\in\Theta}\left|\frac{1}{h_t(\theta)}\frac{\partial h_t(\theta)}{\partial\theta_i}\right|\le w_{3t},\quad \sup_{\theta\in\Theta}\left|\frac{1}{\sqrt{h_t(\theta)}}\frac{\partial^2\varepsilon_t(\theta)}{\partial\theta_i\partial\theta_j}\right|\le w_{4t},$$
$$\sup_{\theta\in\Theta}\left|\frac{1}{\sqrt{h_t(\theta)}}\frac{\partial^3\varepsilon_t(\theta)}{\partial\theta_i\partial\theta_j\partial\theta_k}\right|\le w_{5t},\quad \sup_{\theta\in\Theta}\left|\frac{1}{h_t(\theta)}\frac{\partial^2 h_t(\theta)}{\partial\theta_i\partial\theta_j}\right|\le w_{6t},\quad \sup_{\theta\in\Theta}\left|\frac{1}{h_t(\theta)}\frac{\partial^3 h_t(\theta)}{\partial\theta_i\partial\theta_j\partial\theta_k}\right|\le w_{7t}.$$
(A.19)
Proof  The above inequalities can be shown by analogous arguments, so we only give the details for the last one in the case $\theta_i=\theta_j=\theta_k=\beta$; namely, we show that
$$\sup_{\theta\in\Theta}\left|\frac{1}{h_t(\theta)}\frac{\partial^3 h_t(\theta)}{\partial\beta^3}\right|\le w_{7t}.$$
(A.20)
From (A.18),
$$\left|\frac{1}{h_t(\theta)}\frac{\partial^3 h_t(\theta)}{\partial\beta^3}\right|=\frac{\frac{6\omega}{(1-\beta)^4}+\alpha\sum_{i=3}^{\infty}i(i-1)(i-2)\beta^{i-3}y_{t-i-1}^2}{\frac{\omega}{1-\beta}+\alpha\sum_{j=0}^{\infty}\beta^j y_{t-j-1}^2}=\frac{\frac{6\omega}{(1-\beta)^4}}{\frac{\omega}{1-\beta}+\alpha\sum_{j=0}^{\infty}\beta^j y_{t-j-1}^2}+\frac{\alpha\sum_{i=3}^{\infty}i(i-1)(i-2)\beta^{i-3}y_{t-i-1}^2}{\frac{\omega}{1-\beta}+\alpha\sum_{j=0}^{\infty}\beta^j y_{t-j-1}^2}\equiv I_{1t}+I_{2t}.$$
It is easy to obtain
$$I_{1t}=\frac{\frac{6\omega}{(1-\beta)^4}}{\frac{\omega}{1-\beta}+\alpha\sum_{j=0}^{\infty}\beta^j y_{t-j-1}^2}\le\frac{\frac{6\omega}{(1-\beta)^4}}{\frac{\omega}{1-\beta}}\le\frac{6}{(1-\overline{\beta})^3}.$$
Hence, we only need to show that $I_{2t}$ is bounded by a suitable positive series. By definition,
$$I_{2t}=\frac{1}{\beta^3}\cdot\frac{\sum_{i=3}^{\infty}i(i-1)(i-2)\alpha\beta^i y_{t-i-1}^2}{\frac{\omega}{1-\beta}+\alpha\sum_{j=0}^{\infty}\beta^j y_{t-j-1}^2}\le\frac{1}{\beta^3}\sum_{i=3}^{\infty}\frac{i(i-1)(i-2)\,\alpha\beta^i y_{t-i-1}^2}{\frac{\omega}{1-\beta}+\alpha\beta^i y_{t-i-1}^2}=\frac{1}{\beta^3}\sum_{i=3}^{\infty}i(i-1)(i-2)\,\frac{\alpha\beta^i y_{t-i-1}^2\big/\frac{\omega}{1-\beta}}{1+\alpha\beta^i y_{t-i-1}^2\big/\frac{\omega}{1-\beta}}.$$
(A.21)
For any $s\in(0,1)$ and $x>0$, it is not difficult to show that (for $x\ge 1$, $x/(1+x)<1\le x^s$; for $0<x<1$, $x/(1+x)<x\le x^s$)
$$\frac{x}{1+x}<x^s.$$
(A.22)
Set $x=\alpha\beta^i y_{t-i-1}^2\big/\frac{\omega}{1-\beta}$. Based on (A.21) and (A.22), for a certain $s\in(0,1)$,
$$I_{2t}\le\frac{1}{\beta^3}\sum_{i=3}^{\infty}i(i-1)(i-2)\Big(\alpha\beta^i y_{t-i-1}^2\big/\tfrac{\omega}{1-\beta}\Big)^s=\frac{1}{\beta^3}\sum_{i=3}^{\infty}\Big[\frac{\alpha(1-\beta)}{\omega}\Big]^s i(i-1)(i-2)\beta^{si}|y_{t-i-1}|^{2s}=\frac{1}{\beta^3}\Big[\frac{\alpha(1-\beta)}{\omega}\Big]^s\beta^{3s}\sum_{i=3}^{\infty}i(i-1)(i-2)\beta^{s(i-3)}|y_{t-i-1}|^{2s}\le\frac{\overline{\beta}^{3s}}{\underline{\beta}^3}\Big[\frac{\overline{\alpha}(1-\underline{\beta})}{\underline{\omega}}\Big]^s\sum_{i=3}^{\infty}i(i-1)(i-2)\overline{\beta}^{s(i-3)}|y_{t-i-1}|^{2s}.$$
(A.23)
Let $\rho=\overline{\beta}^s$ and let $C$ be a finite constant such that
$$\frac{\overline{\beta}^{3s}}{\underline{\beta}^3}\Big[\frac{\overline{\alpha}(1-\underline{\beta})}{\underline{\omega}}\Big]^s\le C.$$
From (A.23), it follows that
$$I_{2t}\le C\sum_{i=3}^{\infty}i(i-1)(i-2)\rho^{i-3}|y_{t-i-1}|^{2s}\equiv w_{7t}.$$
For any $q>1$, by the Minkowski inequality,
$$\big[E(|w_{7t}|^q)\big]^{\frac{1}{q}}=\Big[E\Big(\Big|C\sum_{i=3}^{\infty}i(i-1)(i-2)\rho^{i-3}|y_{t-i-1}|^{2s}\Big|^q\Big)\Big]^{\frac{1}{q}}\le C\sum_{i=3}^{\infty}i(i-1)(i-2)\rho^{i-3}\big(E|y_{t-i-1}|^{2sq}\big)^{\frac{1}{q}}.$$
No matter how large $q$ is, we can always find a small $s\in(0,1)$ such that $2sq$ is small enough to guarantee $E|y_{t-i-1}|^{2sq}<\infty$ (e.g., $2sq<2$). Then, by stationarity, we obtain
$$\big[E(w_{7t}^q)\big]^{\frac{1}{q}}\le C\big(E|y_{t-i-1}|^{2sq}\big)^{\frac{1}{q}}\sum_{i=3}^{\infty}i(i-1)(i-2)\rho^{i-3}<\infty,$$
and hence
$$E|w_{7t}|^q<\infty,$$
which ends the proof of Lemma 2.
Proof of Theorem 1  According to Lemma 1, we only need to show that conditions (ⅰ)–(ⅲ) hold. We consider condition (ⅰ) first. Recall that $h_t=h_t(\theta_0)$, $\varepsilon_t=\varepsilon_t(\theta_0)$ and $\varsigma=E\eta_t^4-1$. Further, put
$$\frac{\partial h_t}{\partial\theta}=\frac{\partial h_t(\theta_0)}{\partial\theta},\qquad \frac{\partial\varepsilon_t}{\partial\theta}=\frac{\partial\varepsilon_t(\theta_0)}{\partial\theta}.$$
From (A.1),
$$\sqrt{n}\,\frac{\partial L_n(\theta_0)}{\partial\theta}=\frac{1}{\sqrt{n}}\sum_{t=1}^{n}\left[\left(1-\frac{\varepsilon_t^2}{h_t}\right)\frac{1}{h_t}\frac{\partial h_t}{\partial\theta}+\frac{2\varepsilon_t}{h_t}\frac{\partial\varepsilon_t}{\partial\theta}\right]\equiv\frac{1}{\sqrt{n}}\sum_{t=1}^{n}S_t.$$
Consider any non-zero vector $c=(c_1,c_2,c_3,c_4)^{\tau}$; then we have
$$\sqrt{n}\,c^{\tau}\frac{\partial L_n(\theta_0)}{\partial\theta}=\sum_{t=1}^{n}\Big(\frac{1}{\sqrt{n}}c^{\tau}S_t\Big)\equiv\sum_{t=1}^{n}W_t.$$
Let $\mathcal{F}_{t-1}=\sigma\{\eta_{t-1},\eta_{t-2},\cdots,\eta_1,y_0,y_{-1}\}$ be the information set up to time $t-1$; then $\{W_t\}_{t=1}^{n}$ is a martingale difference sequence with respect to $\mathcal{F}_{t-1}$ and
$$E(W_t^2|\mathcal{F}_{t-1})=E\Big(\frac{1}{n}c^{\tau}S_tS_t^{\tau}c\,\Big|\,\mathcal{F}_{t-1}\Big)=\frac{1}{n}c^{\tau}E[S_tS_t^{\tau}|\mathcal{F}_{t-1}]c.$$
Under Assumptions (ⅰ)–(ⅱ) of Theorem 1, it is not difficult to get
$$E[S_tS_t^{\tau}|\mathcal{F}_{t-1}]=\frac{\varsigma}{h_t^2}\frac{\partial h_t}{\partial\theta}\frac{\partial h_t}{\partial\theta^{\tau}}+\frac{4}{h_t}\frac{\partial\varepsilon_t}{\partial\theta}\frac{\partial\varepsilon_t}{\partial\theta^{\tau}}\equiv\Omega_{S,t}.$$
Hence,
$$\sum_{t=1}^{n}E(W_t^2|\mathcal{F}_{t-1})=\sum_{t=1}^{n}\frac{1}{n}c^{\tau}E(S_tS_t^{\tau}|\mathcal{F}_{t-1})c=c^{\tau}\Big(\frac{1}{n}\sum_{t=1}^{n}\Omega_{S,t}\Big)c\xrightarrow{P}c^{\tau}\Omega_S c,$$
where
$$\Omega_S=E(\Omega_{S,t})=E\Big(\frac{\varsigma}{h_t^2}\frac{\partial h_t}{\partial\theta}\frac{\partial h_t}{\partial\theta^{\tau}}+\frac{4}{h_t}\frac{\partial\varepsilon_t}{\partial\theta}\frac{\partial\varepsilon_t}{\partial\theta^{\tau}}\Big).$$
(A.24)
Given any δ>0, we have
$$\sum_{t=1}^{n}E\big(W_t^2 I_{\{|W_t|\ge\delta\}}\big)=\frac{1}{n}\sum_{t=1}^{n}E\big(c^{\tau}S_tS_t^{\tau}c\,I_{\{c^{\tau}S_tS_t^{\tau}c\ \ge\ n\delta^2\}}\big)=E\big(c^{\tau}S_1S_1^{\tau}c\,I_{\{c^{\tau}S_1S_1^{\tau}c\ \ge\ n\delta^2\}}\big)\to 0\quad(n\to\infty).$$
The above limit follows from the fact that $E(\Omega_{S,t})<\infty$, which can easily be shown based on Lemma 2 and Hölder's inequality. By the martingale central limit theorem (see, for example, Theorem 35.12 in Billingsley[21]), it follows that
$$\sum_{t=1}^{n}W_t\xrightarrow{L}N(0,\ c^{\tau}\Omega_S c),\quad\text{and hence}\quad \sqrt{n}\,\frac{\partial L_n(\theta_0)}{\partial\theta}\xrightarrow{L}N(0,\Omega_S).$$
(A.25)
For condition (ⅱ) of Lemma 1, by the double expectation formula, it is not difficult to obtain
$$E\Big[\frac{\partial^2 l_t(\theta_0)}{\partial\theta\partial\theta^{\tau}}\Big]=E\Big(\frac{1}{h_t^2}\frac{\partial h_t}{\partial\theta}\frac{\partial h_t}{\partial\theta^{\tau}}+\frac{2}{h_t}\frac{\partial\varepsilon_t}{\partial\theta}\frac{\partial\varepsilon_t}{\partial\theta^{\tau}}\Big)\equiv\Omega_I,\qquad \frac{\partial^2 L_n(\theta_0)}{\partial\theta\partial\theta^{\tau}}=\frac{1}{n}\sum_{t=1}^{n}\frac{\partial^2 l_t(\theta_0)}{\partial\theta\partial\theta^{\tau}}\xrightarrow{P}\Omega_I.$$
For condition (ⅲ) of Lemma 1, the result can be shown based on Lemma 2. From (A.3), it suffices to show that $|d_{it}(\theta)|$, $i=1,2,3$, are each bounded by a stationary positive sequence with finite expectation. We only consider $|d_{1t}(\theta)|$; the other cases can be proved similarly. From (A.4), rewrite $d_{1t}(\theta)$ in the following form
$$d_{1t}(\theta)=2\Big(1-\frac{3\varepsilon_t^2(\theta)}{h_t(\theta)}\Big)\frac{1}{h_t(\theta)}\frac{\partial h_t(\theta)}{\partial\theta_i}\cdot\frac{1}{h_t(\theta)}\frac{\partial h_t(\theta)}{\partial\theta_j}\cdot\frac{1}{h_t(\theta)}\frac{\partial h_t(\theta)}{\partial\theta_k}+\frac{2\varepsilon_t(\theta)}{\sqrt{h_t(\theta)}}\sum_{s_1,s_2,s_3}\frac{1}{h_t(\theta)}\frac{\partial h_t(\theta)}{\partial\theta_{s_1}}\cdot\frac{1}{h_t(\theta)}\frac{\partial h_t(\theta)}{\partial\theta_{s_2}}\cdot\frac{1}{\sqrt{h_t(\theta)}}\frac{\partial\varepsilon_t(\theta)}{\partial\theta_{s_3}}-\sum_{s_1,s_2,s_3}\frac{1}{h_t(\theta)}\frac{\partial h_t(\theta)}{\partial\theta_{s_1}}\cdot\frac{1}{\sqrt{h_t(\theta)}}\frac{\partial\varepsilon_t(\theta)}{\partial\theta_{s_2}}\cdot\frac{1}{\sqrt{h_t(\theta)}}\frac{\partial\varepsilon_t(\theta)}{\partial\theta_{s_3}}.$$
(A.26)
Based on Lemma 2, there exists some positive constant D such that
$$\sup_{\theta\in\Theta}|d_{1t}(\theta)|\le D\big[(1+w_{1t}^2)w_{3t}^3+w_{1t}w_{2t}w_{3t}^2+w_{3t}w_{2t}^2\big]\equiv c_t.$$
(A.27)
Applying Hölder's inequality twice to the second term in the bracket of (A.27), and using Lemma 2,
$$\big[E(w_{1t}w_{2t}w_{3t}^2)\big]^2\le E\big[(w_{1t}w_{2t})^2\big]\,E\big[(w_{3t}^2)^2\big]<\infty,$$
and hence $E(w_{1t}w_{2t}w_{3t}^2)<\infty$. Similarly, one can show that the expectations of the other two terms in the bracket of (A.27) are finite, which implies
$$E(c_t)=c<\infty.$$
(A.28)
Combining (A.26), (A.27) and (A.28), it follows that
$$\sup_{\theta\in\Theta}\Big|\frac{1}{n}\sum_{t=1}^{n}d_{1t}(\theta)\Big|\le\frac{1}{n}\sum_{t=1}^{n}c_t\equiv c_n\xrightarrow{P}c,$$
(A.29)
which ends the proof of Theorem 1.

References

[1] Ling S Q. Estimation and testing stationarity for double-autoregressive models. Journal of the Royal Statistical Society: Series B, 2004, 66(1): 63-78.
[2] Ling S Q. A double AR(p) model: Structure and estimation. Statistica Sinica, 2007, 17(1): 161-175.
[3] Ling S Q, Li D. Asymptotic inference for a nonstationary double AR(1) model. Biometrika, 2008, 95(1): 257-263.
[4] Chen M, Li D, Ling S Q. Non-stationarity and quasi-maximum likelihood estimation on a double autoregressive model. Journal of Time Series Analysis, 2014, 35(3): 189-202.
[5] Kwok S S M, Li W K. A note on diagnostic checking of the double autoregressive model. Journal of Statistical Computation & Simulation, 2009, 79(5): 705-715.
[6] Liu J C. Structure of a double autoregressive process driven by a hidden Markov chain. Statistics & Probability Letters, 2012, 82: 1468-1473.
[7] Cai Y Z, Montes-Rojas G, Olmo J. Quantile double AR time series models for financial returns. Journal of Forecasting, 2013, 32(6): 551-560.
[8] Zhu K, Ling S Q. Quasi-maximum exponential likelihood estimators for a double AR(p) model. Statistica Sinica, 2013, 23(1): 251-270.
[9] Guo S J, Ling S Q, Zhu K. Factor double autoregressive models with application to simultaneous causality testing. Journal of Statistical Planning & Inference, 2014, 148: 82-94.
[10] Zhang X F, Wong H, Li Y, et al. A class of threshold autoregressive conditional heteroscedastic models. Statistics & Its Interface, 2011, 2(2): 149-157.
[11] Li D, Ling S Q, Zakoïan J M. Asymptotic inference in multiple-threshold double autoregressive models. Journal of Econometrics, 2015, 189(2): 415-427.
[12] Li D, Ling S Q, Zhang R. On a threshold double autoregressive model. Journal of Business & Economic Statistics, 2016, 34(1): 68-80.
[13] Li G D, Zhu Q, Liu Z, et al. On mixture double autoregressive time series models. Journal of Business & Economic Statistics, 2015 (in press).
[14] Liu F, Li D, Kang X. Sample path properties of an explosive double autoregressive model. Econometric Reviews, 2016 (in press).
[15] Engle R F. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica, 1982, 50(4): 987-1007.
[16] Bollerslev T. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 1986, 31(3): 307-327.
[17] Christensen B J, Dahl C M, Iglesias E M. Semiparametric inference in a GARCH-in-mean model. Journal of Econometrics, 2012, 167(2): 458-472.
[18] Zhang X F, Wong H, Li Y, et al. An alternative GARCH-in-mean model: Structure and estimation. Communications in Statistics - Theory and Methods, 2013, 42: 1821-1838.
[19] Zhang X F, Wong H, Li Y. A functional coefficient GARCH-M model. Communications in Statistics - Theory and Methods, 2016, 45(13): 3807-3821.
[20] Jensen S T, Rahbek A. Asymptotic inference for non-stationary GARCH. Econometric Theory, 2004, 20(6): 1203-1226.
[21] Billingsley P. Probability and Measure. 3rd Edition. New York: Wiley, 1995.

Acknowledgements

The authors gratefully acknowledge the Editor and referees for their insightful comments and helpful suggestions that led to a marked improvement of the article.

Funding

National Natural Science Foundation of China(11401123)
National Natural Science Foundation of China(11571148)
Key Project of National Natural Science Foundation of China(11731015)