==Performance of Stochastic Restricted and Unrestricted Two-Parameter Estimators in Linear Mixed Models==

'''Nahid Ganjealivand<sup>1</sup>, Fatemeh Ghapani<sup>*2</sup>, Ali Zaherzadeh<sup>3</sup>, Farshin Hormozinejad<sup>1</sup>'''
  
 
<sup>1</sup> Department of Statistics, Ahvaz Branch, Islamic Azad University, Ahvaz, Iran
 
  
ganjealivandsci@gmail.com
 
  
 
<sup>2</sup> Department of Mathematics and Statistics, Shoushtar Branch, Islamic Azad University, Shoushtar, Iran
 
  
∗Corresponding author. Email addresses: f.ghapani@iau-shoushtar.ac.ir, f-ghapani@phdstu.ac.ir
  
 
<sup>3</sup> Jundi-Shapur University of Technology, Dezful, Iran
 
 
==Abstract==

In this article, a two-parameter estimator based on the penalized likelihood method is proposed for the linear mixed model. In addition, by considering a stochastic linear restriction on the vector of fixed effects parameters, we introduce the stochastic restricted two-parameter estimator. Methods are proposed for estimating the variance parameters when they are unknown. Also, the superiority conditions of the two-parameter estimator over the best linear unbiased estimator, and of the stochastic restricted two-parameter estimator over the stochastic restricted best linear unbiased estimator, are obtained under the mean square error matrix sense. Methods are also proposed for estimating the biasing parameters. Finally, a simulation study and a numerical example are given to evaluate the proposed estimators.
  
'''Keywords''': Linear mixed model, two parameter estimation, stochastic restricted two parameter estimation, matrix mean square error

'''Mathematics Subject Classification''': 62J05, 62J12
==1. Introduction==

Today many datasets violate the assumption of data independence, which is the main presupposition of many statistical models. For example, for data collected by cluster or hierarchical sampling, in longitudinal studies with repeated measurements, or in medical research that simultaneously collects data from one or more body members, the assumption of data independence is unacceptable because the data of a cluster, a group, or an individual are interdependent over time [1]. The default requirement for fitting linear models is the assumption of data independence; when it does not hold, these models still yield unbiased estimates, but the variances of the estimated coefficients are strongly affected by the violated assumption. In other words, if the data are not independent, the standard errors, and therefore the confidence intervals and test results for the regression coefficients, cannot be trusted. Therefore, in analyzing these data it is necessary to use methods that can account for this dependence. One of the most important ways to solve this problem is linear mixed models, which are generalizations of simple linear models that allow random and fixed effects to be combined. Linear mixed models are used in many fields of the physical, biological, medical and social sciences [2-5].
  
 
We consider the linear mixed model (LMM) as follows:
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math display="inline">y=X\beta +Zu+\epsilon</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (1)
|}

where <math display="inline">y</math> is an <math display="inline">n\times 1</math> vector of observations, <math display="inline">\mathit{\boldsymbol{Z}}=\left[ {\mathit{\boldsymbol{Z}}}_{1},\ldots ,{\mathit{\boldsymbol{Z}}}_{b}\right]</math> where <math display="inline">{\mathit{\boldsymbol{Z}}}_{i}</math> is an <math display="inline">n\times {q}_{i}</math> design matrix corresponding to the <math display="inline">i</math>-th random effects factor and <math display="inline">q=\displaystyle\sum _{i=1}^{b}{q}_{i}</math>, <math display="inline">X</math> is an <math display="inline">n\times p</math> observed design matrix for the fixed effects, <math display="inline">\beta</math> is a <math display="inline">p\times 1</math> parameter vector of unknown fixed effects, <math display="inline">\mathit{\boldsymbol{u}}={\left[ {\mathit{\boldsymbol{u}}}_{1}^{'},\ldots ,{\mathit{\boldsymbol{u}}}_{b}^{'}\right] }^{'}</math> is a <math display="inline">q\times 1</math> unobservable vector of random effects and <math display="inline">\mathit{\boldsymbol{\epsilon }}</math> is an <math display="inline">n\times 1</math> unobservable vector of random errors. <math display="inline">u</math> and <math display="inline">\epsilon</math> are independent and have a multivariate normal distribution as
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>\left[ \begin{matrix}u\\\epsilon \end{matrix}\right] \sim N\left( \left[ \begin{matrix}0\\0\end{matrix}\right] ,{\sigma }^{2}\left[ \begin{matrix}G\left( \zeta \right) &0\\0&W\left( \xi \right) \end{matrix}\right] \right)</math>
|}

where <math display="inline">\zeta</math> and <math display="inline">\xi</math> are <math display="inline">{r}_{1}\times 1</math> and <math display="inline">{r}_{2}\times 1</math> vectors of variance parameters corresponding to <math display="inline">u</math> and <math display="inline">\epsilon</math>, respectively. Henderson et al. [6-7] introduced the set of equations called the mixed model equations, and obtained <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}</math> and <math display="inline">\tilde{\mathit{\boldsymbol{u}}}</math> as
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>\tilde{\mathit{\boldsymbol{\beta }}}={\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}\right) }^{-1}{\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{y}}</math>
|}
 
{|class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>\tilde{\mathit{\boldsymbol{u}}}=\mathit{\boldsymbol{G}}{\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\left( \mathit{\boldsymbol{y}}-\mathit{\boldsymbol{X}}\tilde{\mathit{\boldsymbol{\beta }}}\right)</math>
|}
where <math display="inline">\mathrm{Var}\,(\mathit{\boldsymbol{y}})={\sigma }^{2}\mathit{\boldsymbol{H}}</math> and <math display="inline">\mathit{\boldsymbol{H}}=\mathit{\boldsymbol{ZG}}{\mathit{\boldsymbol{Z}}}^{'}+\mathit{\boldsymbol{W}}</math>. Here <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}</math> and <math display="inline">\tilde{\mathit{\boldsymbol{u}}}</math> are called the best linear unbiased estimator (BLUE) and the best linear unbiased predictor (BLUP), respectively. One of the most common estimators in linear regression is the ordinary least squares (OLS) estimator, which in the case of multicollinearity may lead to estimates with adverse effects such as high variance [8]. To reduce the effects of multicollinearity, the ridge estimator and the Liu estimator were proposed [9-10]; they are well-known alternatives to the OLS estimator. Yang and Chang [11] obtained the two parameter estimator <math display="inline">\overset{\mbox{ˆ}}{\mathit{\boldsymbol{\beta }}}(k,d)</math> "using the mixed estimation technique introduced by Theil et al. [12-13]. They considered the prior information about <math display="inline">\mathit{\boldsymbol{\beta }}</math> in the form of the restriction <math display="inline">(d-k)\overset{\mbox{ˆ}}{\mathit{\boldsymbol{\beta }}}(k)=\mathit{\boldsymbol{\beta }}+{\mathit{\boldsymbol{\epsilon }}}_{0},</math> where <math display="inline">k,d</math> and <math display="inline">\overset{\mbox{ˆ}}{\mathit{\boldsymbol{\beta }}}(k)</math> are respectively the ridge parameter, the Liu parameter and the ridge estimator".

In LMM, authors such as Gilmour et al. [14], Jiang and Lahiri [15] and Patel and Patel [16] considered a state where the matrix <math display="inline">{X}^{'}{H}^{-1}X</math> is singular. Liu and Hu [10] and Eliot et al. [17] inquired into the ridge prediction in LMM. Liu and Hu [10] obtained <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}(k)</math> and <math display="inline">\tilde{\mathit{\boldsymbol{u}}}(k)</math> as
  
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>\tilde{\mathit{\boldsymbol{\beta }}}(k)={\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+k{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}{\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{y}}</math>
|}

{|class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>\tilde{u}(k)=G{Z}^{'}{H}^{-1}(y-X\tilde{\beta }(k))</math>
|}
where <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}(k)</math> and <math display="inline">\tilde{\mathit{\boldsymbol{u}}}(k)</math> are the ridge estimator of <math display="inline">\mathit{\boldsymbol{\beta }}</math> and the ridge predictor of <math display="inline">\mathit{\boldsymbol{u}},</math> respectively. Özkale and Can [18] gave "an example from kidney failure data" to evaluate the ridge estimator in the linear mixed model. Kuran and Özkale [19] obtained the mixed and stochastic restricted ridge predictors by using the Gilmour approach. They introduced the "stochastic linear restriction as <math display="inline">r=R\beta +\Phi ,</math> where <math display="inline">r</math> is an <math display="inline">m\times 1</math> vector, <math display="inline">\mathit{\boldsymbol{R}}</math> is an <math display="inline">m\times p</math> known matrix of rank <math display="inline">m\leq p</math> and <math display="inline">\Phi</math> is an <math display="inline">m\times 1</math> random vector that is assumed to be distributed with <math display="inline">E(\Phi )=0</math> and <math display="inline">\mathrm{Var}\,(\Phi )={\sigma }^{2}\mathit{\boldsymbol{V}}(\mathit{\boldsymbol{v}}),</math> where <math display="inline">v</math> is a <math display="inline">v\times 1</math> vector of variance parameters corresponding to <math display="inline">\Phi .</math> Also, <math display="inline">\Phi</math> and <math display="inline">\epsilon</math> are independent".

They then derived the stochastic restricted estimator of <math display="inline">\beta</math> and the stochastic restricted predictor of <math display="inline">\mathit{\boldsymbol{u}},</math> respectively, as
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}={\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}\right) }^{-1}\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{y}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{r}}\right)</math>
|}
 
{|class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>{\tilde{u}}_{r}=G{Z}^{'}{H}^{-1}\left( y-X{\tilde{\beta }}_{r}\right)</math>
|}
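To make these formulas concrete, the following minimal Python/NumPy sketch (ours, not part of the original paper) simulates one small LMM with known <math display="inline">G</math> and <math display="inline">W</math> and evaluates the BLUE/BLUP, the ridge pair of Liu and Hu [10], and the stochastic restricted pair of Kuran and Özkale [19]; all dimensions, variance structures and the value of <math display="inline">k</math> are illustrative assumptions.

<syntaxhighlight lang="python">
# Minimal simulation sketch; every numeric choice below is an assumption.
import numpy as np

rng = np.random.default_rng(0)
n, p, q, m = 50, 4, 6, 2                     # assumed sizes
X = rng.normal(size=(n, p))
Z = rng.normal(size=(n, q))
beta = np.ones(p)
sigma2 = 1.0
G = 0.5 * np.eye(q)                          # Var(u)   = sigma^2 G (assumed)
W = np.eye(n)                                # Var(eps) = sigma^2 W (assumed)
u = rng.multivariate_normal(np.zeros(q), sigma2 * G)
eps = rng.multivariate_normal(np.zeros(n), sigma2 * W)
y = X @ beta + Z @ u + eps

H = Z @ G @ Z.T + W                          # Var(y) = sigma^2 H
Hinv = np.linalg.inv(H)

# BLUE and BLUP from Henderson's mixed model equations
beta_blue = np.linalg.solve(X.T @ Hinv @ X, X.T @ Hinv @ y)
u_blup = G @ Z.T @ Hinv @ (y - X @ beta_blue)

# Ridge estimator and ridge predictor of Liu and Hu [10]
k = 0.8                                      # assumed biasing parameter
beta_k = np.linalg.solve(X.T @ Hinv @ X + k * np.eye(p), X.T @ Hinv @ y)
u_k = G @ Z.T @ Hinv @ (y - X @ beta_k)

# Stochastic restriction r = R beta + Phi with Var(Phi) = sigma^2 V (assumed)
R = rng.normal(size=(m, p))
V = np.eye(m)
r = R @ beta + rng.multivariate_normal(np.zeros(m), sigma2 * V)
Vinv = np.linalg.inv(V)

# Stochastic restricted estimator and predictor
A = X.T @ Hinv @ X + R.T @ Vinv @ R
beta_r = np.linalg.solve(A, X.T @ Hinv @ y + R.T @ Vinv @ r)
u_r = G @ Z.T @ Hinv @ (y - X @ beta_r)
</syntaxhighlight>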
Furthermore, they obtained the stochastic restricted ridge estimator of <math display="inline">\beta</math> and the stochastic restricted ridge predictor of <math display="inline">\mathit{\boldsymbol{u}},</math> respectively, as
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k)={\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+k{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{y}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{r}}\right)</math>
|}

{|class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>{\tilde{\mathit{\boldsymbol{u}}}}_{r}(k)=\mathit{\boldsymbol{G}}{\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\left( \mathit{\boldsymbol{y}}-\mathit{\boldsymbol{X}}{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k)\right)</math>
|}
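Under the same assumed setup as in the sketch above, this stochastic restricted ridge pair adds only the <math display="inline">k{\mathit{\boldsymbol{I}}}_{p}</math> term:

<syntaxhighlight lang="python">
# Stochastic restricted ridge estimator/predictor, continuing the sketch above
beta_r_k = np.linalg.solve(A + k * np.eye(p), X.T @ Hinv @ y + R.T @ Vinv @ r)
u_r_k = G @ Z.T @ Hinv @ (y - X @ beta_r_k)
</syntaxhighlight>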
In this article, we obtain new two-parameter estimators in linear mixed models by taking Yang and Chang's idea [11] and considering the restriction <math display="inline">(d-k)\tilde{\beta }(k)=\beta +{\epsilon }_{0}.</math> In Section 2, we follow the idea of Henderson's mixed model equations to get the two-parameter estimator. Then, by setting the stochastic linear restriction <math display="inline">r=R\beta +\Phi</math> on the vector of fixed effects parameters, we derive the stochastic restricted two-parameter estimator. In Section 3, estimates for the variance parameters are obtained when they are unknown. In Section 4, we compare the new two-parameter estimators under the mean square error matrix (MSEM) sense. In Section 5, methods are proposed for estimating the biasing parameters. In Sections 6 and 7, a simulation study and a real data analysis are given. Finally, a summary and some conclusions are given in Section 8.

==2. The proposed estimators==
Under model (1), we have
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>\left[ \begin{matrix}u\\y\end{matrix}\right] \sim N\left( \left[ \begin{matrix}0\\X\mathit{\boldsymbol{\beta }}\end{matrix}\right] ,{\sigma }^{2}\left[ \begin{matrix}G&G{Z}^{'}\\ZG&H\end{matrix}\right] \right)</math>
|}
and the logarithm of the joint density of <math display="inline">u</math> and <math display="inline">y</math> is given by

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>\begin{array}{ll}\mathrm{ln}\,f(y,u) & =\displaystyle\frac{-1}{2{\sigma }^{2}}\left[ {\left( y-X\beta -Zu\right) }^{'}{W}^{-1}\left( y-X\beta -Zu\right) +{u}^{'}{G}^{-1}u\right] \\
&\,\,\, -\displaystyle\frac{n+q}{2}\mathrm{ln}\,\left( 2\pi {\sigma }^{2}\right) -\displaystyle\frac{1}{2}\mathrm{ln}\,\vert W\vert -\displaystyle\frac{1}{2}\mathrm{ln}\,\vert G\vert \end{array}</math>
|}
  
 
where <math display="inline">G</math> and <math display="inline">W</math> are nonsingular. If the restriction used by Yang and Chang [11] in linear regression is transferred to the linear mixed model, we can produce the two-parameter estimator using the "penalized term" idea. So we unify the restriction <math display="inline">(d-k)\tilde{\beta }(k)=\beta +{\epsilon }_{0}</math> with model (1) to give
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math>{y}_{\ast }={X}_{\ast }\beta +{Z}_{\ast }u+{\epsilon }_{\ast }</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (2)
|}
where
{| class="formulaSCP" style="width: 100%; text-align: center;"
|-
|
{| style="text-align: center; margin:auto;"
|-
| <math>{\epsilon }_{0}\sim N\left( 0,{I}_{\ast }\right)</math>
|}
|}
with
  
{| class="formulaSCP" style="width: 100%; text-align: center;"
|-
|
{| style="text-align: center; margin:auto;"
|-
|<math>\begin{array}{lll}
{I}_{\ast }={\sigma }^{2}{I}_{p}, & \quad {y}_{\ast }=\left[ \begin{matrix}y\\(d-k)\tilde{\beta }(k)\end{matrix}\right] ,& \quad {X}_{\ast }=\left[ \begin{matrix}X\\{I}_{p}\end{matrix}\right] ,\\
{Z}_{\star }=\left[ \begin{matrix}Z\\0\end{matrix}\right]  ,& \quad {\epsilon }_{\ast }=\left[ \begin{matrix}\epsilon \\{\epsilon }_{0}\end{matrix}\right], &\quad {W}_{\star }=\left[ \begin{matrix}W&0\\0&{I}_{\star }\end{matrix}\right]\end{array}</math>
|}
|}
  
Then <math display="inline">u</math> and <math display="inline">{y}_{\ast }</math> are jointly distributed as

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
|
{| style="text-align: center; margin:auto;"
|-
| <math>\left[ \begin{matrix}u\\{y}_{\star }\end{matrix}\right] \sim N\left( \left[ \begin{matrix}0\\{X}_{\star }\mathit{\boldsymbol{\beta }}\end{matrix}\right] ,{\sigma }^{2}\left[ \begin{matrix}G&G{Z}_{\star }^{'}\\{Z}_{\star }G&{H}_{\star }\end{matrix}\right] \right)</math>
|}
|}
  
 
where <math display="inline">{H}_{\star }={Z}_{\star }G{Z}_{\star }^{'}+{W}_{\star }=\left[ \begin{matrix}H&0\\0&{I}_{\star }\end{matrix}\right]</math>. The conditional distribution of <math display="inline">{y}_{\ast }</math> given <math display="inline">u</math> is <math display="inline">{y}_{\ast }\mid u\sim N\left( {X}_{\ast }\beta +{Z}_{\ast }u,{\sigma }^{2}{W}_{\ast }\right)</math> and the logarithm of the joint density of <math display="inline">{y}_{\ast }</math> and <math display="inline">u</math> is given by
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>\begin{array}{ll}\mathrm{ln}\,f({y}_{\ast },u)&\, =\displaystyle\frac{-1}{2{\sigma }^{2}}\left[ {\left( {y}_{\ast }-{X}_{\ast }\beta -{Z}_{\ast }u\right) }^{'}{W}_{\ast }^{-1}\left( {y}_{\ast }-{X}_{\ast }\beta -{Z}_{\ast }u\right) +{u}^{'}{G}^{-1}u\right] \\
&\,\,\, -\displaystyle\frac{n+p+q}{2}\mathrm{ln}\,\left( 2\pi {\sigma }^{2}\right) -\displaystyle\frac{1}{2}\mathrm{ln}\,\left| {W}_{\ast }\right| -\displaystyle\frac{1}{2}\mathrm{ln}\,\vert G\vert \end{array}</math>
|}
  
The penalized log-likelihood function is obtained by substituting <math display="inline">{y}_{\ast },{X}_{\ast },{Z}_{\ast }</math> and <math display="inline">{W}_{\ast }</math> in <math display="inline">{\left( {y}_{\ast }-{X}_{\ast }\beta -{Z}_{\ast }u\right) }^{'}{W}_{\ast }^{-1}\left( {y}_{\ast }-{X}_{\ast }\beta -{Z}_{\ast }u\right) ,</math> as follows:
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math>\begin{array}{ll}\mathrm{ln}\,f(y,u) & =\displaystyle\frac{-1}{2{\sigma }^{2}}\left[ {y}^{'}{W}^{-1}y+(d-k{)}^{2}{\tilde{\beta }}^{'}(k)\tilde{\beta }(k)-2{\beta }^{'}{X}^{'}{W}^{-1}y-2(d-k){\tilde{\beta }}^{'}(k)\beta \right. \\&\left. -2{u}^{'}{Z}^{'}{W}^{-1}y+2{\beta }^{'}{X}^{'}{W}^{-1}Zu+{\beta }^{'}\left( {X}^{'}{W}^{-1}X+{I}_{p}\right) \beta +{u}^{'}{Z}^{'}{W}^{-1}Zu+{u}^{'}{G}^{-1}u\right] \\
&\, -\displaystyle\frac{n+p+q}{2}\mathrm{ln}\,\left( 2\pi {\sigma }^{2}\right) -\displaystyle\frac{1}{2}\mathrm{ln}\,\vert W\vert -\displaystyle\frac{1}{2}\mathrm{ln}\,\vert G\vert \end{array}</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (3)
|}
From Eq. (3), we take the partial derivatives with respect to <math display="inline">\beta</math> and <math display="inline">\mathit{\boldsymbol{u}}</math>, set the equations to zero, and denote the solutions by <math display="inline">\tilde{\beta }(k,d)</math> and <math display="inline">\tilde{u}(k,d)</math>, which gives
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
|
{| style="text-align: center; margin:auto;"
|-
|<math>\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{I}}}_{p}\right) \tilde{\mathit{\boldsymbol{\beta }}}\left( k,d\right) +{\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{Z}}\tilde{\mathit{\boldsymbol{u}}}\left( k,d\right) ={\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{y}}+\left( d-k\right) \tilde{\mathit{\boldsymbol{\beta }}}\left( k\right) </math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (4)
|}
  
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
|
{| style="text-align: center; margin:auto;"
|-
|<math>{\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{X}}\tilde{\mathit{\boldsymbol{\beta }}}\left( k,d\right) +\left( {\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{Z}}+{\mathit{\boldsymbol{G}}}^{-1}\right) \tilde{\mathit{\boldsymbol{u}}}\left( k,d\right) ={\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{y}}</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (5)
|}
  
By solving Eq. (5), <math display="inline">\tilde{u}(k,d)</math> is obtained as
{| class="formulaSCP" style="width: 100%; text-align: center;"
|-
|
{| style="text-align: center; margin:auto;"
|-
|<math>\begin{array}{ll}\tilde{u}(k,d) & = {\left( {\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{Z}}+{\mathit{\boldsymbol{G}}}^{-1}\right) }^{-1}\left( {\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{y}}-{\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{X}}\tilde{\mathit{\boldsymbol{\beta }}}(k,d)\right) \\
& ={\left( {\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{Z}}+{\mathit{\boldsymbol{G}}}^{-1}\right) }^{-1}{\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}(\mathit{\boldsymbol{y}}-\mathit{\boldsymbol{X}}\tilde{\mathit{\boldsymbol{\beta }}}(k,d))\end{array}</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (6)
|}
  
Substituting <math display="inline">\tilde{u}(k,d)</math> into Eq. (4), we get
  
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math>\begin{matrix}\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{I}}}_{p}\right) \tilde{\mathit{\boldsymbol{\beta }}}(k,d)+{\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{Z}}{\left( {\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{Z}}+{\mathit{\boldsymbol{G}}}^{-1}\right) }^{-1}{\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}(\mathit{\boldsymbol{y}}-\mathit{\boldsymbol{X}}\tilde{\mathit{\boldsymbol{\beta }}}(k,d))\\={\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{y}}+(d-k)\tilde{\mathit{\boldsymbol{\beta }}}(k)\end{matrix}</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (7)
|}
 
Solving Eq. (7) for <math display="inline">\tilde{\beta }(k,d)</math> gives

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math>\tilde{\beta }(k,d)={\left( {X}^{'}{H}^{-1}X+{I}_{p}\right) }^{-1}\left( {X}^{'}{H}^{-1}y+(d-k)\tilde{\beta }(k)\right)</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (8)
|}
  
  
In Eq. (8), if we put <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}(k)={\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+k{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}{\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{y}},</math> then <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}(k,d)</math> is obtained as follows
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| 
{| style="text-align: center; margin:auto;" 
|-
| <math>\tilde{\beta }(k,d)={\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+d{\mathit{\boldsymbol{I}}}_{p}\right) {\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+k{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}{\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{y}}</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (9)
|}
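As a sanity check (continuing the illustrative sketch from the Introduction, with an assumed Liu parameter <math display="inline">d</math>), Eq. (9) can be evaluated directly and agrees numerically with the fixed-point form of Eq. (8):

<syntaxhighlight lang="python">
# Two-parameter estimator via Eq. (9); d is an assumed illustrative value.
d = 0.5
XtHX = X.T @ Hinv @ X
beta_kd = (np.linalg.inv(XtHX + np.eye(p))
           @ (XtHX + d * np.eye(p))
           @ np.linalg.solve(XtHX + k * np.eye(p), X.T @ Hinv @ y))

# Equivalent form of Eq. (8), using the ridge estimator beta_k from above
beta_kd_check = np.linalg.solve(XtHX + np.eye(p),
                                X.T @ Hinv @ y + (d - k) * beta_k)
assert np.allclose(beta_kd, beta_kd_check)
</syntaxhighlight>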
Due to <math display="inline">\left( {Z}^{'}{W}^{-1}Z+ {G}^{-1}\right) G{Z}^{'}={Z}^{'}{W}^{-1}ZG{Z}^{'}+{Z}^{'}={Z}^{'}{W}^{-1}\left( ZG{Z}^{'}+ W\right) ={Z}^{'}{W}^{-1}H,</math> we get
  
 
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
|
{| style="text-align: center; margin:auto;"
|-
|<math>{\left( {\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{Z}}+{\mathit{\boldsymbol{G}}}^{-1}\right) }^{-1}{\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}=\mathit{\boldsymbol{G}}{\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (10)
|}
Using Eq. (10), <math display="inline">\tilde{u}(k,d)</math> equals <math display="inline">\tilde{u}(k,d)=G{Z}^{'}{H}^{-1}(y-X\tilde{\beta }(k,d))</math>.

In this section, we also obtain the stochastic restricted two-parameter estimator. For this, the stochastic linear restriction <math display="inline">r=R\beta +\Phi</math> can be unified with model (1) and the restriction <math display="inline">(d-k)\tilde{\beta }(k)=\beta +{\epsilon }_{0}</math> to give
 
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
|
{| style="text-align: center; margin:auto;"
|-
| <math>{\mathit{\boldsymbol{y}}}_{{r}^{\star }}={\mathit{\boldsymbol{X}}}_{{r}^{\star }}\mathit{\boldsymbol{\beta }}+{\mathit{\boldsymbol{Z}}}_{{r}^{\star }}\mathit{\boldsymbol{u}}+{\mathit{\boldsymbol{\epsilon }}}_{{r}^{\star }}</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (11)
|}

where
  
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
|
{| style="text-align: center; margin:auto;"
|-
|<math>{y}_{{r}^{\ast }}=\left[ \begin{matrix}y\\(d-k){\tilde{\beta }}_{r}(k)\\r\end{matrix}\right] ,\quad {X}_{{r}^{\ast }}=\left[ \begin{matrix}X\\{I}_{p}\\R\end{matrix}\right] ,\quad {Z}_{{r}^{\ast }}=\left[ \begin{matrix}Z\\0\\0\end{matrix}\right] ,\quad {\epsilon }_{{r}^{\ast }}=\left[ \begin{matrix}\epsilon \\{\epsilon }_{0}\\\Phi \end{matrix}\right] ,\quad {W}_{{r}^{\star }}=\left[ \begin{matrix}W&0&0\\0&{I}_{\ast }&0\\0&0&V\end{matrix}\right]</math>
|}
|}
Then, the conditional distribution of <math display="inline">{y}_{{r}^{\ast }}</math> given <math display="inline">\mathit{\boldsymbol{u}}</math> is <math display="inline">{\mathit{\boldsymbol{y}}}_{{r}^{\ast }}\mid \mathit{\boldsymbol{u}}\sim N\left( {\mathit{\boldsymbol{X}}}_{{r}^{\ast }}\mathit{\boldsymbol{\beta }}+{\mathit{\boldsymbol{Z}}}_{{r}^{\ast }}\mathit{\boldsymbol{u}},{\sigma }^{2}{\mathit{\boldsymbol{W}}}_{{r}^{\ast }}\right)</math> and the logarithm of the joint density of <math display="inline">{y}_{{r}^{\ast }}</math> and <math display="inline">u</math> is given by
 
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>\begin{array}{ll} \mathrm{ln}\,g(\mathit{\boldsymbol{y}},\mathit{\boldsymbol{u}}) = & \displaystyle\frac{-1}{2{\sigma }^{2}}\left[ {\left( {\mathit{\boldsymbol{y}}}_{{r}^{\ast }}-{\mathit{\boldsymbol{X}}}_{{r}^{\ast }}\mathit{\boldsymbol{\beta }}-{\mathit{\boldsymbol{Z}}}_{{r}^{\ast }}\mathit{\boldsymbol{u}}\right) }^{'}{\mathit{\boldsymbol{W}}}_{{r}^{\ast }}^{-1}\left( {\mathit{\boldsymbol{y}}}_{{r}^{\ast }}-{\mathit{\boldsymbol{X}}}_{{r}^{\ast }}\mathit{\boldsymbol{\beta }}-{\mathit{\boldsymbol{Z}}}_{{r}^{\ast }}\mathit{\boldsymbol{u}}\right) +{\mathit{\boldsymbol{u}}}^{'}{\mathit{\boldsymbol{G}}}^{-1}\mathit{\boldsymbol{u}}\right] \\
& -\displaystyle\frac{n+m+p+q}{2}\mathrm{ln}\,\left( 2\pi {\sigma }^{2}\right) -\displaystyle\frac{1}{2}\mathrm{ln}\,\left| {\mathit{\boldsymbol{W}}}_{{r}^{\ast }}\right| -\displaystyle\frac{1}{2}\mathrm{ln}\,\vert \mathit{\boldsymbol{G}}\vert \end{array}</math>
|}
  
Substituting <math display="inline">{y}_{{r}^{\ast }},{X}_{{r}^{\ast }},{Z}_{{r}^{\ast }}</math> and <math display="inline">{W}_{{r}^{\ast }}</math> into <math display="inline">{\left( {y}_{{r}^{\ast }}-{X}_{{r}^{\ast }}\beta -{Z}_{{r}^{\ast }}u\right) }^{'}{W}_{{r}^{\ast }}^{-1}\left( {y}_{{r}^{\ast }}-{X}_{{r}^{\ast }}\beta -{Z}_{{r}^{\ast }}u\right) ,</math> the penalized log-likelihood function is obtained as follows:
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
|-
 
|-
| <math>{l}_{ML}\left( \mathit{\boldsymbol{\beta }},\mathit{\boldsymbol{\phi }},{\mathit{\boldsymbol{y}}}_{\ast }\right) =</math><math>\frac{-1}{2{\sigma }^{2}}\left\{ {\left( {\mathit{\boldsymbol{y}}}_{\ast }-{\mathit{\boldsymbol{X}}}_{\ast }\mathit{\boldsymbol{\beta }}\right) }^{'}{\mathit{\boldsymbol{H}}}_{\ast }^{-1}\left( {\mathit{\boldsymbol{y}}}_{\ast }-\right. \right. </math><math>\left. \left. {\mathit{\boldsymbol{X}}}_{\ast }\mathit{\boldsymbol{\beta }}\right) \right\} -</math><math>\frac{n+p}{2}\mathrm{ln}\,\left( 2\pi {\mathit{\boldsymbol{\sigma }}}^{2}\right) -</math><math>\frac{1}{2}\mathrm{ln}\,\left| {\mathit{\boldsymbol{H}}}_{\ast }\right| \quad (17)</math>
+
|  
 +
{| style="text-align: center; margin:auto;"
 +
|-
 +
<math>\begin{matrix}\mathrm{ln}\,g(y,u)=\displaystyle\frac{-1}{2{\sigma }^{2}}\left[ {y}^{'}{W}^{-1}y+(d-k{)}^{2}{\tilde{\beta }}_{r}^{'}(k){\tilde{\beta }}_{r}(k)+{r}^{'}{V}^{-1}r-2{\beta }^{'}X{W}^{-1}y-2(d-k){\tilde{\beta }}_{r}^{'}(k)\beta \right. \\-2{u}^{'}{Z}^{'}{W}^{-1}y-2{\beta }^{'}{R}^{'}{V}^{-1}r+\beta {R}^{'}{V}^{-1}R\beta +\beta \left( X{W}^{-1}X+{I}_{p}\right) \beta +2{u}^{'}Z{W}^{-1}X\beta \\\left. +{u}^{'}Z{W}^{-1}Zu+{u}^{'}{G}^{-1}u\right] -\displaystyle\frac{n+m+p+q}{2}\mathrm{ln}\,\left( 2\pi {\sigma }^{2}\right) -\displaystyle\frac{1}{2}\mathrm{ln}\,{W}_{{r}^{\star }}\left| -\displaystyle\frac{1}{2}ln\right| G\mid \end{matrix}</math>
 +
|}
 +
| style="width: 5px;text-align: right;white-space: nowrap;" | (12)
 
|}
 
|}
  
  
where <math display="inline">\phi ={\left( {\mathit{\boldsymbol{K}}}^{'},{\sigma }^{2}\right) }^{'},\mathit{\boldsymbol{k}}=</math><math>{\left( {\zeta }^{'},{\xi }^{'}\right) }^{'}</math> and are <math display="inline">\left( {r}_{1}+\right. </math><math>\left. {r}_{2}+1\right) \times 1</math> and <math display="inline">\left( {r}_{1}+\right. </math><math>\left. {r}_{2}\right) \times 1</math> vectors of unknown<br/>parameters, respectively. Differentiating the equation (17) with respect to <math display="inline">\beta ,{\sigma }^{2}</math> and <math display="inline">{\kappa }_{j}=</math><math>1,\ldots ,{r}_{1}+{r}_{2}</math> the partial derivatives is obtained as
+
From Eq. (12), we take the partial derivatives with respect to <math display="inline">\beta</math> and <math display="inline">\mathit{\boldsymbol{u}},</math> set the resulting equations equal to zero, and denote the solutions by <math display="inline">{\tilde{\beta }}_{r}(k,d)</math> and <math display="inline">{\tilde{u}}_{r}(k,d)</math>, which gives
  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
Line 247: Line 326:
 
{| style="text-align: center; margin:auto;"  
 
{| style="text-align: center; margin:auto;"  
 
|-
 
|-
| <math display="inline">\frac{\partial {l}_{ML}\left( \mathit{\boldsymbol{\beta }},\mathit{\boldsymbol{\phi }};{\mathit{\boldsymbol{y}}}_{\ast }\right) }{\partial \mathit{\boldsymbol{\beta }}}=</math><math>\frac{-1}{{\sigma }^{2}}\left( {\mathit{\boldsymbol{X}}}_{\ast }^{'}{\mathit{\boldsymbol{H}}}_{\ast }^{-1}{\mathit{\boldsymbol{X}}}_{\ast }\mathit{\boldsymbol{\beta }}-\right. </math><math>\left. {\mathit{\boldsymbol{X}}}_{\ast }^{'}{\mathit{\boldsymbol{H}}}_{\ast }^{-1}{\mathit{\boldsymbol{y}}}_{\ast }\right) ,\quad \quad \quad \quad (18)</math>  <br/> <math display="inline">\frac{\partial {l}_{ML}\left( \mathit{\boldsymbol{\beta }},\mathit{\boldsymbol{\phi }};{\mathit{\boldsymbol{y}}}_{\ast }\right) }{\partial {\sigma }^{2}}=</math><math>\frac{-(n+p)}{2{\sigma }^{2}}+\frac{{\left( {\mathit{\boldsymbol{y}}}_{\ast }-\mathit{\boldsymbol{X}}\cdot \mathit{\boldsymbol{\beta }}\right) }^{'}{\mathit{\boldsymbol{H}}}_{\cdot }^{-1}\left( {\mathit{\boldsymbol{y}}}_{\ast }-\mathit{\boldsymbol{X}}\cdot \mathit{\boldsymbol{\beta }}\right) }{2{\sigma }^{4}},\quad \quad \quad \quad (19)</math> <br/> <math display="inline">\frac{\partial {l}_{ML}\left( \mathit{\boldsymbol{\beta }},\mathit{\boldsymbol{\phi }};{\mathit{\boldsymbol{y}}}_{\ast }\right) }{\partial {\kappa }_{i}}=</math><math>\frac{-1}{2}\mathrm{tr}\,\left( {\mathit{\boldsymbol{H}}}_{\ast }^{-1}{\dot{\mathit{\boldsymbol{H}}}}_{{\ast }_{j}}\right) +</math><math>\frac{{\left( {\mathit{\boldsymbol{y}}}_{\ast }-{\mathit{\boldsymbol{X}}}_{\ast }\mathit{\boldsymbol{\beta }}\right) }^{'}{\mathit{\boldsymbol{H}}}_{\ast }^{-1}{\dot{\mathit{\boldsymbol{H}}}}_{\ast }{\mathit{\boldsymbol{H}}}_{\ast }^{-1}\left( {\mathit{\boldsymbol{y}}}_{\ast }-\mathit{\boldsymbol{X}}\cdot \mathit{\boldsymbol{\beta }}\right) }{2{\sigma }^{2}} \quad \quad \quad \quad (20)</math>
+
| <math>\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+{\mathit{\boldsymbol{I}}}_{p}\right) {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)+\mathit{\boldsymbol{X}}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{Z}}{\tilde{\mathit{\boldsymbol{u}}}}_{r}(k,d)=\mathit{\boldsymbol{X}}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{y}}+(d-k)\tilde{\mathit{\boldsymbol{\beta }}},(k)+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{r}} </math>
 
|}
 
|}
| style="width: 5px;text-align: right;white-space: nowrap;" |
+
| style="width: 5px;text-align: right;white-space: nowrap;" | (13)
 
|}
 
|}
  
 
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
|
{| style="text-align: center; margin:auto;"
|-
| <math>{\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{X}}{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}\left( k,d\right) +\left( {\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{Z}}+{\mathit{\boldsymbol{G}}}^{-1}\right) {\tilde{\mathit{\boldsymbol{u}}}}_{r}\left( k,d\right) ={\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{W}}}^{-1}\mathit{\boldsymbol{y}}</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (14)
|}
  
  
Solving these equations in the same way as Eqs. (4) and (5) gives the following results
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
|
{| style="text-align: center; margin:auto;"
|-
| <math>\begin{array}{ll}{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}\left( k,d\right) & ={\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+d{\mathit{\boldsymbol{I}}}_{p}\right) \\ & \times {\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+k{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{y}}+{\mathit{\boldsymbol{R}}}^{\boldsymbol{'}}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{r}}\right)\end{array}</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (15)
|}
  
{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
|
{| style="text-align: center; margin:auto;"
|-
| <math>{\tilde{\mathit{\boldsymbol{u}}}}_{r}\left( k,d\right) =\mathit{\boldsymbol{G}}{\mathit{\boldsymbol{Z}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\left( \mathit{\boldsymbol{y}}-\mathit{\boldsymbol{X}}{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}\left( k,d\right) \right) </math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" | (16)
|}
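
For readers who wish to evaluate Eqs. (15) and (16) numerically, the following Python sketch computes <math display="inline">{\tilde{\beta }}_{r}(k,d)</math> and <math display="inline">{\tilde{u}}_{r}(k,d)</math> directly from the closed forms. It assumes <math display="inline">\mathit{\boldsymbol{H}}=\mathit{\boldsymbol{W}}+\mathit{\boldsymbol{Z}}\mathit{\boldsymbol{G}}{\mathit{\boldsymbol{Z}}}^{'}</math> and that all variance components are known; the function and variable names are illustrative and not taken from the paper.

<syntaxhighlight lang="python">
import numpy as np

def restricted_two_parameter(y, X, Z, G, W, R, r, V, k, d):
    # Marginal covariance (up to sigma^2): H = W + Z G Z' -- an assumption here
    H = W + Z @ G @ Z.T
    Hi = np.linalg.inv(H)
    Vi = np.linalg.inv(V)
    p = X.shape[1]
    Ip = np.eye(p)
    A = X.T @ Hi @ X + R.T @ Vi @ R      # X'H^{-1}X + R'V^{-1}R
    rhs = X.T @ Hi @ y + R.T @ Vi @ r    # X'H^{-1}y + R'V^{-1}r
    # Eq. (15): beta_r(k,d) = (A+I)^{-1} (A+dI) (A+kI)^{-1} rhs
    beta = np.linalg.solve(A + Ip, (A + d * Ip) @ np.linalg.solve(A + k * Ip, rhs))
    # Eq. (16): u_r(k,d) = G Z' H^{-1} (y - X beta)
    u = G @ Z.T @ Hi @ (y - X @ beta)
    return beta, u
</syntaxhighlight>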
  
==3. Estimation of variance parameters==
  
In linear mixed models, the variance parameters within <math display="inline">G</math> and <math display="inline">W</math> are often unknown, and several methods have been proposed by Searle [16,20-22] to estimate them. In this section, we estimate the variance parameters using the ML method. The marginal distribution of <math display="inline">{y}_{\ast }</math> is <math display="inline">N\left( {\mathit{\boldsymbol{X}}}_{\ast }\mathit{\boldsymbol{\beta }},{\sigma }^{2}{\mathit{\boldsymbol{H}}}_{\ast }\right)</math>, therefore we can write the marginal log-likelihood function of <math display="inline">{\mathit{\boldsymbol{y}}}_{\ast }</math> as
  
{| class="formulaSCP" style="width: 100%; text-align: left;"
 +
|-
 +
|
 +
{| style="text-align: center; margin:auto;width: 100%;"
 +
|-
 +
| style="text-align: center;" | <math>{l}_{ML}\left( \mathit{\boldsymbol{\beta }},\mathit{\boldsymbol{\phi }},{\mathit{\boldsymbol{y}}}_{\ast }\right) =</math><math>\displaystyle\frac{-1}{2{\sigma }^{2}}\left\{ {\left( {\mathit{\boldsymbol{y}}}_{\ast }-{\mathit{\boldsymbol{X}}}_{\ast }\mathit{\boldsymbol{\beta }}\right) }^{'}{\mathit{\boldsymbol{H}}}_{\ast }^{-1}\left( {\mathit{\boldsymbol{y}}}_{\ast }-\right. \right. </math><math>\left. \left. {\mathit{\boldsymbol{X}}}_{\ast }\mathit{\boldsymbol{\beta }}\right) \right\} -</math><math>\frac{n+p}{2}\mathrm{ln}\,\left( 2\pi {\mathit{\boldsymbol{\sigma }}}^{2}\right) -</math><math> \displaystyle\frac{1}{2}\mathrm{ln}\,\left| {\mathit{\boldsymbol{H}}}_{\ast }\right| </math>
 +
|}
 +
| style="width: 5px;text-align: right;white-space: nowrap;" |(17)
 +
|}
  
where <math display="inline">\phi ={\left( {\mathit{\boldsymbol{K}}}^{'},{\sigma }^{2}\right) }^{'},\mathit{\boldsymbol{k}}={\left( {\zeta }^{'},{\xi }^{'}\right) }^{'}</math> and <math display="inline">\left( {r}_{1}+{r}_{2}+1\right) \times 1</math> and <math display="inline">\left( {r}_{1}+ {r}_{2}\right) \times 1</math> are vectors of unknown parameters, respectively. Differentiating the Eq. (17) with respect to <math display="inline">\beta ,{\sigma }^{2}</math> and <math display="inline">{\kappa }_{j}=</math><math>1,\ldots ,{r}_{1}+{r}_{2}</math> the partial derivatives is obtained as
{| class="formulaSCP" style="width: 100%; text-align: center;"  
+
 
 +
{| class="formulaSCP" style="width: 100%; text-align: left;"  
 
|-
 
|-
| <math>\mathrm{MSEM}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d),\mathit{\boldsymbol{\beta }})=\mathrm{Var}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d))+\mathrm{bias}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d))\mathrm{bias}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d){)}^{'}</math>
+
|
 +
{| style="text-align: center; margin:auto;width: 100%;"
 +
|-
 +
| style="text-align: center;" | <math>\displaystyle\frac{\partial {l}_{ML}\left( \mathit{\boldsymbol{\beta }},\mathit{\boldsymbol{\phi }};{\mathit{\boldsymbol{y}}}_{\ast }\right) }{\partial \mathit{\boldsymbol{\beta }}}=</math><math>\displaystyle\frac{-1}{{\sigma }^{2}}\left( {\mathit{\boldsymbol{X}}}_{\ast }^{'}{\mathit{\boldsymbol{H}}}_{\ast }^{-1}{\mathit{\boldsymbol{X}}}_{\ast }\mathit{\boldsymbol{\beta }}-\right. </math><math>\left. {\mathit{\boldsymbol{X}}}_{\ast }^{'}{\mathit{\boldsymbol{H}}}_{\ast }^{-1}{\mathit{\boldsymbol{y}}}_{\ast }\right) ,</math>
 +
|}
 +
| style="width: 5px;text-align: right;white-space: nowrap;" |(18)
 
|}
 
|}
  
{| class="formulaSCP" style="width: 100%; text-align: left;" 
|-
|
{| style="text-align: center; margin:auto;width: 100%;"
|-
| style="text-align: center;" |<math>\displaystyle\frac{\partial {l}_{ML}\left( \mathit{\boldsymbol{\beta }},\mathit{\boldsymbol{\phi }};{\mathit{\boldsymbol{y}}}_{\ast }\right) }{\partial {\sigma }^{2}}=\displaystyle\frac{-(n+p)}{2{\sigma }^{2}}+\displaystyle\frac{{\left( {\mathit{\boldsymbol{y}}}_{\ast }-{\mathit{\boldsymbol{X}}}_{\ast }\mathit{\boldsymbol{\beta }}\right) }^{'}{\mathit{\boldsymbol{H}}}_{\ast }^{-1}\left( {\mathit{\boldsymbol{y}}}_{\ast }-{\mathit{\boldsymbol{X}}}_{\ast }\mathit{\boldsymbol{\beta }}\right) }{2{\sigma }^{4}},</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" |(19)
|}
 +
{| class="formulaSCP" style="width: 100%; text-align: left;"
 +
|-
 +
|
 +
{| style="text-align: center; margin:auto;width: 100%;"
 +
|-
 +
| style="text-align: center;" |<math>\displaystyle\frac{\partial {l}_{ML}\left( \mathit{\boldsymbol{\beta }},\mathit{\boldsymbol{\phi }};{\mathit{\boldsymbol{y}}}_{\ast }\right) }{\partial {\kappa }_{i}}=</math><math>\displaystyle\frac{-1}{2}\mathrm{tr}\,\left( {\mathit{\boldsymbol{H}}}_{\ast }^{-1}{\dot{\mathit{\boldsymbol{H}}}}_{{\ast }_{j}}\right) +</math><math>\displaystyle\frac{{\left( {\mathit{\boldsymbol{y}}}_{\ast }-{\mathit{\boldsymbol{X}}}_{\ast }\mathit{\boldsymbol{\beta }}\right) }^{'}{\mathit{\boldsymbol{H}}}_{\ast }^{-1}{\dot{\mathit{\boldsymbol{H}}}}_{\ast }{\mathit{\boldsymbol{H}}}_{\ast }^{-1}\left( {\mathit{\boldsymbol{y}}}_{\ast }-\mathit{\boldsymbol{X}}\cdot \mathit{\boldsymbol{\beta }}\right) }{2{\sigma }^{2}} </math>
 +
|}
 +
| style="width: 5px;text-align: right;white-space: nowrap;" |(20)
 +
|}
 +
 +
where <math display="inline">{\dot{H}}_{\ast j}=\partial {H}_{\ast }/\partial {\kappa }_{j}</math>. Setting Eqs. (18)-(20) equal to zero and using <math display="inline">\tilde{\beta }(k,d),{\tilde{\sigma }}^{2}(k,d)</math> and <math display="inline">\tilde{H}</math> and instead of <math display="inline">\mathit{\boldsymbol{\beta }},{\sigma }^{2}</math> and <math display="inline">\mathit{\boldsymbol{H}}</math> gives
{| class="formulaSCP" style="width: 100%; text-align: left;" 
|-
|
{| style="text-align: center; margin:auto;width: 100%;"
|-
| style="text-align: center;" |<math>{X}_{\ast }^{'}{\tilde{H}}_{\ast }^{-1}{X}_{\ast }\tilde{\beta }(k,d)={X}_{\ast }^{'}{\tilde{H}}_{\ast }^{-1}{y}_{\ast }</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" |(21)
|}
{| class="formulaSCP" style="width: 100%; text-align: left;" 
|-
|
{| style="text-align: center; margin:auto;width: 100%;"
|-
| style="text-align: center;" |<math>(n+p){\tilde{\sigma }}^{2}(k,d)={\left( {y}_{\ast }-{X}_{\ast }\tilde{\beta }\left( k,d\right) \right) }^{'}{\tilde{H}}_{\ast }^{-1}\left( {y}_{\ast }-{X}_{\ast }\tilde{\beta }\left( k,d\right) \right)</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" |(22)
|}
{| class="formulaSCP" style="width: 100%; text-align: left;" 
|-
|
{| style="text-align: center; margin:auto;width: 100%;"
|-
| style="text-align: center;" |<math>\mathrm{tr}\,\left( {\tilde{H}}_{\ast }^{-1}{\tilde{H}}_{\ast j}\right) =\displaystyle\frac{{\left( {y}_{\ast }-{X}_{\ast }\tilde{\beta }\left( k,d\right) \right) }^{'}{\tilde{H}}_{\ast }^{-1}{\tilde{H}}_{\ast j}{\tilde{H}}_{\ast }^{-1}\left( {y}_{\ast }-{X}_{\ast }\tilde{\beta }\left( k,d\right) \right) }{{\tilde{\sigma }}^{2}\left( k,d\right) }</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" |(23)
|}
Solving Eqs. (21) and (22) yields the estimators
{| class="formulaSCP" style="width: 100%; text-align: left;" 
|-
|
{| style="text-align: center; margin:auto;width: 100%;"
|-
| style="text-align: center;" |<math>\begin{array}{ll}\tilde{\mathit{\boldsymbol{\beta }}}(k,d)&={\left( {\mathit{\boldsymbol{X}}}_{\ast }^{'}{\tilde{\mathit{\boldsymbol{H}}}}_{\ast }^{-1}{\mathit{\boldsymbol{X}}}_{\ast }\right) }^{-1}{\mathit{\boldsymbol{X}}}_{\ast }^{'}{\tilde{\mathit{\boldsymbol{H}}}}_{\ast }^{-1}{\mathit{\boldsymbol{y}}}_{\ast }\\&={\left( {\mathit{\boldsymbol{X}}}^{'}{\tilde{\mathit{\boldsymbol{H}}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}\left( {\mathit{\boldsymbol{X}}}^{'}{\tilde{\mathit{\boldsymbol{H}}}}^{-1}\mathit{\boldsymbol{X}}+d{\mathit{\boldsymbol{I}}}_{p}\right) {\left( {\mathit{\boldsymbol{X}}}^{'}{\tilde{\mathit{\boldsymbol{H}}}}^{-1}\mathit{\boldsymbol{X}}+k{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}{\mathit{\boldsymbol{X}}}^{'}{\tilde{\mathit{\boldsymbol{H}}}}^{-1}\mathit{\boldsymbol{y}}\end{array}</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" |(24)
|}
{| class="formulaSCP" style="width: 100%; text-align: left;" 
|-
|
{| style="text-align: center; margin:auto;width: 100%;"
|-
| style="text-align: center;" |<math>\begin{array}{ll}{\tilde{\sigma }}^{2}(k,d)&=\displaystyle\frac{1}{(n+p)}{\left( {\mathit{\boldsymbol{y}}}_{\ast }-{\mathit{\boldsymbol{X}}}_{\ast }\tilde{\mathit{\boldsymbol{\beta }}}(k,d)\right) }^{'}{\tilde{\mathit{\boldsymbol{H}}}}_{\ast }^{-1}\left( {\mathit{\boldsymbol{y}}}_{\ast }-{\mathit{\boldsymbol{X}}}_{\ast }\tilde{\mathit{\boldsymbol{\beta }}}(k,d)\right) \\&=\displaystyle\frac{1}{(n+p)}\left\{ (\mathit{\boldsymbol{y}}-\mathit{\boldsymbol{X}}\tilde{\mathit{\boldsymbol{\beta }}}(k,d){)}^{'}{\tilde{\mathit{\boldsymbol{H}}}}^{-1}(\mathit{\boldsymbol{y}}-\mathit{\boldsymbol{X}}\tilde{\mathit{\boldsymbol{\beta }}}(k,d))\right. \\&\,\,+\left. (\tilde{\mathit{\boldsymbol{\beta }}}(k,d)-(d-k)\tilde{\mathit{\boldsymbol{\beta }}}(k){)}^{'}(\tilde{\mathit{\boldsymbol{\beta }}}(k,d)-(d-k)\tilde{\mathit{\boldsymbol{\beta }}}(k))\right\} \end{array}</math>
|}
| style="width: 5px;text-align: right;white-space: nowrap;" |(25)
|}
Eq. (23) depends on <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}(k,d)</math> and <math display="inline">{\tilde{\sigma }}^{2}(k,d),</math> so iterative procedures must be used to solve for the <math display="inline">{\kappa }_{j}</math>'s. In the statistical literature, there are four iterative procedures to estimate variance parameters: the Newton-Raphson (NR), Expectation Maximization (EM), Fisher Scoring (FS) and Average Information (AI) algorithms. See [22] for details of these procedures. Note that in the stochastic restricted two parameter method, the ML estimators are obtained from equations analogous to Eqs. (21)-(23).
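
As a concrete illustration of this alternating scheme, the sketch below wraps the closed forms (24) and (25) in an outer loop and delegates the variance-parameter update to a user-supplied function (for instance, one Fisher-scoring pass on Eq. (23)). The helpers <code>H_of</code> and <code>kappa_update</code>, and the reading of <math display="inline">\tilde{\beta }(k)</math> in Eq. (25) as the ridge-type estimator <math display="inline">{\left( {X}^{'}{\tilde{H}}^{-1}X+k{I}_{p}\right) }^{-1}{X}^{'}{\tilde{H}}^{-1}y</math>, are our assumptions, not part of the paper.

<syntaxhighlight lang="python">
import numpy as np

def ml_two_parameter(y, X, k, d, kappa0, H_of, kappa_update,
                     max_iter=50, tol=1e-8):
    # Outer loop: alternate Eqs. (24)-(25) with an update of kappa,
    # e.g. one Fisher-scoring pass on Eq. (23), supplied by the caller.
    n, p = X.shape
    kappa = np.asarray(kappa0, dtype=float)
    beta, sigma2 = np.zeros(p), 1.0
    for _ in range(max_iter):
        H = H_of(kappa)                      # H(kappa), assumed supplied
        Hi = np.linalg.inv(H)
        A, Ip = X.T @ Hi @ X, np.eye(p)
        beta_k = np.linalg.solve(A + k * Ip, X.T @ Hi @ y)     # ridge-type beta(k)
        beta = np.linalg.solve(A + Ip, (A + d * Ip) @ beta_k)  # Eq. (24)
        resid = y - X @ beta
        shrink = beta - (d - k) * beta_k
        sigma2 = (resid @ Hi @ resid + shrink @ shrink) / (n + p)  # Eq. (25)
        kappa_new = kappa_update(kappa, beta, sigma2)
        if np.max(np.abs(kappa_new - kappa)) < tol:
            kappa = kappa_new
            break
        kappa = kappa_new
    return beta, sigma2, kappa
</syntaxhighlight>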
==4. Comparison of estimators==
In this section, we compare the estimator <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}(k,d)</math> with <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}</math> and the estimator <math display="inline">{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)</math> with <math display="inline">{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}</math> in the mean square error matrix (MSEM) sense. The estimator <math display="inline">{\tilde{\mathit{\boldsymbol{\beta }}}}_{2}</math> is superior to <math display="inline">{\tilde{\mathit{\boldsymbol{\beta }}}}_{1}</math> in the MSEM sense if and only if <math display="inline">\Delta \left( {\tilde{\beta }}_{1},{\tilde{\beta }}_{2}\right) =MSEM\left( {\tilde{\beta }}_{1}\right) -MSEM\left( {\tilde{\beta }}_{2}\right) >0,</math> that is, <math display="inline">\Delta \left( {\tilde{\beta }}_{1},{\tilde{\beta }}_{2}\right)</math> is a positive definite (pd) matrix. The mean square error matrix for the estimator <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}(k,d)</math> is given as
{| class="formulaSCP" style="width: 100%; text-align: left;" 
|-
|
{| style="text-align: center; margin:auto;width: 100%;"
|-
| style="text-align: center;" | <math>\mathrm{MSEM}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d),\mathit{\boldsymbol{\beta }})=\mathrm{Var}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d))+\mathrm{bias}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d))\mathrm{bias}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d){)}^{'}</math>
|}
|}
  
 
The variance matrix of <math display="inline">\tilde{\beta }(k,d)</math> is

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>\mathrm{Var}\,(\tilde{\beta }(k,d))={\sigma }^{2}{T}_{1}{X}^{'}{H}^{-1}X{T}_{1}^{'}</math>
|}
 
  
 
where <math display="inline">{T}_{1}={\left( {X}^{'}{H}^{-1}X+{I}_{p}\right) }^{-1}\left( {X}^{'}{H}^{-1}X+\right. </math><math>\left. d{I}_{p}\right) {\left( {X}^{'}{H}^{-1}X+k{I}_{p}\right) }^{-1}.</math> The bias <math display="inline">(\tilde{\beta }(k,d))</math> is given as
 
where <math display="inline">{T}_{1}={\left( {X}^{'}{H}^{-1}X+{I}_{p}\right) }^{-1}\left( {X}^{'}{H}^{-1}X+\right. </math><math>\left. d{I}_{p}\right) {\left( {X}^{'}{H}^{-1}X+k{I}_{p}\right) }^{-1}.</math> The bias <math display="inline">(\tilde{\beta }(k,d))</math> is given as
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
|-
 
|-
| <math>~bias~(\tilde{\beta }(k,d))=E(\tilde{\beta }(k,d)-\beta )</math>
+
| bias <math>(\tilde{\beta }(k,d))=E(\tilde{\beta }(k,d)-\beta )=-{\left( \mathit{\boldsymbol{X}}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}{\left( \mathit{\boldsymbol{X}}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+k{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}\left( (k+1-\right. </math><math>\left. d)\mathit{\boldsymbol{X}}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+\right. </math><math>\left. k{\mathit{\boldsymbol{I}}}_{p}\right) \mathit{\boldsymbol{\beta }}</math>
 
|}
 
|}
{|class="formulaSCP" style="width: 100%; text-align: center;"  
+
 
 +
The mean-square error matrix for the estimator <math display="inline">{\tilde{\beta }}_{r}(k,d)</math> is given as
 +
 
 +
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
|-
 
|-
| <math>=-{\left( \mathit{\boldsymbol{X}}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}{\left( \mathit{\boldsymbol{X}}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+k{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}\left( (k+1-\right. </math><math>\left. d)\mathit{\boldsymbol{X}}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+\right. </math><math>\left. k{\mathit{\boldsymbol{I}}}_{p}\right) \mathit{\boldsymbol{\beta }}<br/></math>
+
| <math>\mathrm{MSEM}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d),\mathit{\boldsymbol{\beta }}\right) =</math><math>{\sigma }^{2}{\mathit{\boldsymbol{T}}}_{2}\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+\right. </math><math>\left. {\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}\right) {\mathit{\boldsymbol{T}}}_{2}^{'}+\mathrm{bias}\,\left( {\mathit{\boldsymbol{\beta }}}_{r}(k,d)\right) \mathrm{bias}\,{\left( {\mathit{\boldsymbol{\beta }}}_{r}(k,d)\right) }^{'}</math>
 
|}
 
|}
where

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>{\mathit{\boldsymbol{T}}}_{2}={\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+d{\mathit{\boldsymbol{I}}}_{p}\right) {\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{\boldsymbol{'}}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+k{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}</math>
|}
where <math display="inline">{\mathit{\boldsymbol{T}}}_{2}={\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}\left( {\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+\right. </math><math>\left. {\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+\right. </math><math>\left. d{\mathit{\boldsymbol{I}}}_{p}\right) {\left( \mathit{\boldsymbol{X}}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{\boldsymbol{'}}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+\mathit{\boldsymbol{k}}{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}</math> and<br/>bias <math display="inline">\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)\right) =</math><math>-{\left( \mathit{\boldsymbol{X}}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}{\left( \mathit{\boldsymbol{X}}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+k{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}</math><br/>
+
and  
 +
 
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
|-
 
|-
| <math>\times \left( (k+1-d)\left( \mathit{\boldsymbol{X}}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+\right. \right. </math><math>\left. \left. {\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}\right) +\right. </math><math>\left. k{\mathit{\boldsymbol{I}}}_{p}\right) \mathit{\boldsymbol{\beta }}</math>
+
| <math>\begin{array}{ll}{\rm  bias}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)\right) & =-{\left( \mathit{\boldsymbol{X}}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}{\left( \mathit{\boldsymbol{X}}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+{\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}+k{\mathit{\boldsymbol{I}}}_{p}\right) }^{-1}\\
 +
& \times \left( (k+1-d)\left( \mathit{\boldsymbol{X}}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}+ {\mathit{\boldsymbol{R}}}^{'}{\mathit{\boldsymbol{V}}}^{-1}\mathit{\boldsymbol{R}}\right) + k{\mathit{\boldsymbol{I}}}_{p}\right) \mathit{\boldsymbol{\beta }}\end{array}</math>
 
|}
 
|}
  
  
===4.1 Comparison of the estimator ''&beta;&#771;''(''k'',''d'') with ''&beta;&#771;''===
The <math display="inline">MSEM</math> of <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}</math> is <math display="inline">\mathrm{MSEM}\,(\tilde{\mathit{\boldsymbol{\beta }}})={\sigma }^{2}{\mathit{\boldsymbol{A}}}_{1}^{-1},</math> where <math display="inline">{\mathit{\boldsymbol{A}}}_{1}={\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}},</math> and the <math display="inline">MSEM</math> of <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}(k,d)</math> is

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>\mathrm{MSEM}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d),\mathit{\boldsymbol{\beta }})={\sigma }^{2}{\mathit{\boldsymbol{T}}}_{1}{\mathit{\boldsymbol{A}}}_{1}{\mathit{\boldsymbol{T}}}_{1}^{'}+\mathrm{bias}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d))\mathrm{bias}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d){)}^{'}</math>
|}
  
  
The estimator <math display="inline">\tilde{\beta }(k,d)</math> is superior to the estimator <math display="inline">\tilde{\beta }</math> in the MSEM sense if and only if <math display="inline">\mathrm{MSEM}\,(\tilde{\mathit{\boldsymbol{\beta }}})-\mathrm{MSEM}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d))>0</math>, that is,

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>\Delta \left( \tilde{\mathit{\boldsymbol{\beta }}},\tilde{\mathit{\boldsymbol{\beta }}}(k,d)\right) ={\sigma }^{2}\left( {\mathit{\boldsymbol{A}}}_{1}^{-1}-{\mathit{\boldsymbol{T}}}_{1}{\mathit{\boldsymbol{A}}}_{1}{\mathit{\boldsymbol{T}}}_{1}\right) -\left( {\mathit{\boldsymbol{T}}}_{1}{\mathit{\boldsymbol{A}}}_{1}-{\mathit{\boldsymbol{I}}}_{p}\right) \mathit{\boldsymbol{\beta }}{\mathit{\boldsymbol{\beta }}}^{'}{\left( {\mathit{\boldsymbol{T}}}_{1}{\mathit{\boldsymbol{A}}}_{1}-{\mathit{\boldsymbol{I}}}_{p}\right) }^{'}>0</math>
|}
Line 324: Line 537:
  
  
According to Farebrother <math display="inline">(1976),</math> if <math display="inline">{\sigma }^{2}\left( {A}_{1}^{-1}-\right. </math><math>\left. {T}_{1}{A}_{1}{T}_{1}\right) >0,</math> then the necessary and sufficient condition for <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}(k,d)</math> to be superior to <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}</math> is <math display="inline">{\mathit{\boldsymbol{\beta }}}^{'}{\left( {\mathit{\boldsymbol{T}}}_{1}{\mathit{\boldsymbol{A}}}_{1}-{\mathit{\boldsymbol{I}}}_{p}\right) }^{'}{\left[ {\sigma }^{2}\left( {\mathit{\boldsymbol{A}}}_{1}^{-1}-{\mathit{\boldsymbol{T}}}_{1}{\mathit{\boldsymbol{A}}}_{1}{\mathit{\boldsymbol{T}}}_{1}\right) \right] }^{-1}\left( {\mathit{\boldsymbol{T}}}_{1}{\mathit{\boldsymbol{A}}}_{1}-\right. </math><math>\left. {\mathit{\boldsymbol{I}}}_{p}\right) \mathit{\boldsymbol{\beta }}<1</math>.
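
In practice this condition can be checked numerically. The sketch below is one way to do it, with <math display="inline">{\mathit{\boldsymbol{A}}}_{1}</math>, <math display="inline">\mathit{\boldsymbol{\beta }}</math> and <math display="inline">{\sigma }^{2}</math> assumed available (in applications they would be replaced by estimates); it is an illustration, not code from the paper.

<syntaxhighlight lang="python">
import numpy as np

def two_parameter_superior(A1, beta, sigma2, k, d):
    # T1 = (A1+I)^{-1}(A1+dI)(A1+kI)^{-1}, as defined above
    p = A1.shape[0]
    Ip = np.eye(p)
    T1 = np.linalg.inv(A1 + Ip) @ (A1 + d * Ip) @ np.linalg.inv(A1 + k * Ip)
    D = sigma2 * (np.linalg.inv(A1) - T1 @ A1 @ T1)
    try:
        np.linalg.cholesky(D)        # checks that D is positive definite
    except np.linalg.LinAlgError:
        return False                 # Farebrother's condition not applicable
    v = (T1 @ A1 - Ip) @ beta
    # necessary and sufficient condition: quadratic form strictly below one
    return float(v @ np.linalg.solve(D, v)) < 1.0
</syntaxhighlight>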
  
==5. Selection of parameters '''''k''''' and '''''d'''''==
  
In the linear regression model, choosing the parameter <math display="inline">k</math> is important, so many statisticians have suggested methods for obtaining this parameter; see, for example, [23-29]. According to Ozkale and Can [18], we rewrite model (1) in the form of a marginal model in which the random effects are not explicitly defined:
  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
Line 341: Line 554:
  
  
Because <math display="inline">\mathit{\boldsymbol{H}}</math> is pd, there is a nonsingular symmetric matrix <math display="inline">\mathit{\boldsymbol{N}}</math> such that <math display="inline">\mathit{\boldsymbol{H}}={\mathit{\boldsymbol{N}}}^{'}\mathit{\boldsymbol{N}}</math>. If we multiply both sides of model (26) by <math display="inline">{\boldsymbol{N}}^{-1}</math>, we get <math display="inline">{\mathit{\boldsymbol{y}}}^{\ast }={\mathit{\boldsymbol{X}}}^{\ast }\mathit{\boldsymbol{\beta }}+{\mathit{\boldsymbol{S}}}^{\ast }</math>, where <math display="inline">{\mathit{\boldsymbol{y}}}^{\ast }={\mathit{\boldsymbol{N}}}^{-1}\mathit{\boldsymbol{y}},{\mathit{\boldsymbol{X}}}^{\ast }={\mathit{\boldsymbol{N}}}^{-1}\mathit{\boldsymbol{X}}</math>, <math display="inline">{\mathit{\boldsymbol{S}}}^{\ast }={\mathit{\boldsymbol{N}}}^{-1}\mathit{\boldsymbol{S}}</math> and <math display="inline">\mathrm{Var}\,\left( {\mathit{\boldsymbol{S}}}^{\ast }\right) ={\sigma }^{2}{\mathit{\boldsymbol{I}}}_{n}</math>. The matrix <math display="inline">{\mathit{\boldsymbol{X}}}^{{\ast }'}{\mathit{\boldsymbol{X}}}^{\ast }={\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}</math> is symmetric, so there is an orthogonal matrix <math display="inline">\mathit{\boldsymbol{P}}</math> such that <math display="inline">{\mathit{\boldsymbol{P}}}^{'}\mathit{\boldsymbol{P}}=\mathit{\boldsymbol{P}}{\mathit{\boldsymbol{P}}}^{'}={\mathit{\boldsymbol{I}}}_{p}</math> and <math display="inline">{\mathit{\boldsymbol{P}}}^{'}{\mathit{\boldsymbol{X}}}^{{\ast }'}{\mathit{\boldsymbol{X}}}^{\ast }\mathit{\boldsymbol{P}}=\boldsymbol{\Lambda }=\mathrm{diag}\,\left( {\lambda }_{1},\ldots ,{\lambda }_{p}\right) ,</math> where <math display="inline">{\lambda }_{1}\geq {\lambda }_{2}\geq \ldots \geq {\lambda }_{p}</math> are the ordered eigenvalues of <math display="inline">{\mathit{\boldsymbol{X}}}^{{\ast }'}{\mathit{\boldsymbol{X}}}^{\ast }</math>. Then model (26) can be rewritten in canonical form as <math display="inline">{\mathit{\boldsymbol{y}}}^{\ast }={\mathit{\boldsymbol{X}}}^{\ast \ast }\mathit{\boldsymbol{\gamma }}+{\mathit{\boldsymbol{S}}}^{\ast }</math>, where <math display="inline">{\mathit{\boldsymbol{X}}}^{\ast \ast }={\mathit{\boldsymbol{X}}}^{\ast }\mathit{\boldsymbol{P}}</math> and <math display="inline">\mathit{\boldsymbol{\gamma }}={\mathit{\boldsymbol{P}}}^{'}\mathit{\boldsymbol{\beta }}</math>. Under this model, we get the following representation:
 
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
|-
 
|-
Line 347: Line 563:
 
|}
 
|}
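
The canonical reduction can be computed directly; below is a minimal sketch, assuming <math display="inline">\mathit{\boldsymbol{N}}</math> is taken as the symmetric square root of <math display="inline">\mathit{\boldsymbol{H}}</math>.

<syntaxhighlight lang="python">
import numpy as np

def canonical_form(y, X, H):
    # Symmetric square root N with H = N'N = N N (H is pd)
    vals, vecs = np.linalg.eigh(H)
    Ninv = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    y_s, X_s = Ninv @ y, Ninv @ X            # y* = N^{-1}y, X* = N^{-1}X
    lam, P = np.linalg.eigh(X_s.T @ X_s)     # eigenvalues in ascending order
    lam, P = lam[::-1], P[:, ::-1]           # reorder so lambda_1 >= ... >= lambda_p
    X_ss = X_s @ P                           # X** = X* P, with X**'X** = Lambda
    return y_s, X_ss, lam, P                 # gamma = P' beta in this parametrization
</syntaxhighlight>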
  
We use <math display="inline">\mathrm{MSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d),\mathit{\boldsymbol{\beta }})</math> to find the optimal values of <math display="inline">k</math> and <math display="inline">d</math>, where <math display="inline">\mathrm{MSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d),\mathit{\boldsymbol{\beta }})</math> is the mean square error. For fixed <math display="inline">d</math>, the optimal value of <math display="inline">k</math> can be obtained by minimizing the following expression
 
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
|-
 
|-
Line 354: Line 570:
 
|}
 
|}
  
 
Notice that <math display="inline">\mathrm{MSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d),\mathit{\boldsymbol{\beta }})=\mathrm{MSE}\,(\tilde{\gamma }(k,d),\gamma )=\mathrm{tr}\,[\mathrm{MSEM}\,(\tilde{\gamma }(k,d),\gamma )].</math> Therefore, setting <math display="inline">\displaystyle\frac{\partial \mathrm{MSE}\,(\tilde{\gamma }(k,d),\gamma )}{\partial k}=0,</math> we obtain <math display="inline">k=\displaystyle\frac{{\sigma }^{2}\left( {\lambda }_{i}+d\right) -{\lambda }_{i}{\gamma }_{i}^{2}(1-d)}{{\gamma }_{i}^{2}\left( {\lambda }_{i}+1\right) }.</math> Since the optimal <math display="inline">k</math> depends on the unknown <math display="inline">\gamma</math> and <math display="inline">{\sigma }^{2},</math> according to Hoerl and Kennard [9] we can estimate <math display="inline">k</math> by substituting <math display="inline">\tilde{\gamma }</math> and <math display="inline">{\tilde{\sigma }}^{2}</math> as follows:
  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
Line 366: Line 577:
 
{| style="text-align: center; margin:auto;"  
 
{| style="text-align: center; margin:auto;"  
 
|-
 
|-
| <math display="inline">\tilde{k}=\frac{{\tilde{\sigma }}^{2}\left( {\lambda }_{i}+d\right) -{\lambda }_{i}{\tilde{\gamma }}_{i}^{2}(1-d)}{{\tilde{\gamma }}_{i}^{2}\left( {\lambda }_{i}+1\right) }</math>
+
| <math display="inline">\tilde{k}=\displaystyle\frac{{\tilde{\sigma }}^{2}\left( {\lambda }_{i}+d\right) -{\lambda }_{i}{\tilde{\gamma }}_{i}^{2}(1-d)}{{\tilde{\gamma }}_{i}^{2}\left( {\lambda }_{i}+1\right) }</math>
 
|}
 
|}
 
| style="width: 5px;text-align: right;white-space: nowrap;" | (27)
 
| style="width: 5px;text-align: right;white-space: nowrap;" | (27)
 
|}
 
|}
  
where <math display="inline">\tilde{\gamma }</math> and <math display="inline">{\tilde{\sigma }}^{2}</math> are the unbiased estimators of <math display="inline">\mathit{\boldsymbol{\gamma }}</math> and <math display="inline">{\sigma }^{2}</math>. Following the estimators of <math display="inline">k</math> proposed by Kibria [25] and Hoerl and Kennard [9], the harmonic mean of the values of <math display="inline">k</math> in (27) is taken as

{| class="formulaSCP" style="width: 100%; text-align: center;" 
|-
| <math>{\tilde{k}}_{HM}=\displaystyle\frac{p}{\sum _{i=1}^{p}\displaystyle\frac{{\tilde{\gamma }}_{i}^{2}\left( {\lambda }_{i}+1\right) }{{\tilde{\sigma }}^{2}\left( {\lambda }_{i}+d\right) -{\lambda }_{i}{\tilde{\gamma }}_{i}^{2}(1-d)}}</math>
|}

Now, let <math display="inline">k</math> be fixed, and obtain the optimal value of <math display="inline">d</math> by minimizing <math display="inline">\mathrm{MSE}\,(\tilde{\gamma }(k,d),\gamma )</math>. Setting <math display="inline">\displaystyle\frac{\partial \mathrm{MSE}\,(\tilde{\gamma }(k,d),\gamma )}{\partial d}=0</math> gives
  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
Line 381: Line 591:
 
{| style="text-align: center; margin:auto;"  
 
{| style="text-align: center; margin:auto;"  
 
|-
 
|-
| <math display="inline">{\tilde{d}}_{opt}=\frac{\sum _{i=1}^{p}\frac{[\left( \left( k+1\right) {\lambda }_{i}+k\right) {\lambda }_{i}{\tilde{\gamma }}_{i}^{2}-\, {\lambda }_{i}^{2}{\tilde{\sigma }}^{2}\, ]}{[{(\, {\lambda }_{i}+k)}^{2}\ast {({\lambda }_{i}\, +1)}^{2}]}}{\sum _{i=1}^{p}\frac{[\left( {\tilde{\sigma }}^{2}\, +\, \, {\lambda }_{i}{\tilde{\gamma }}_{i}^{2}\right) {\lambda }_{i}]}{[{({\lambda }_{i}+k)}^{2}\ast {({\lambda }_{i}+1)}^{2}]}}</math>
+
| <math display="inline">{\tilde{d}}_{opt}=\displaystyle\frac{\sum _{i=1}^{p}\displaystyle\frac{[\left( \left( k+1\right) {\lambda }_{i}+k\right) {\lambda }_{i}{\tilde{\gamma }}_{i}^{2}-\, {\lambda }_{i}^{2}{\tilde{\sigma }}^{2}\, ]}{[{(\, {\lambda }_{i}+k)}^{2}\ast {({\lambda }_{i}\, +1)}^{2}]}}{\sum _{i=1}^{p}\displaystyle\frac{[\left( {\tilde{\sigma }}^{2}\, +\, \, {\lambda }_{i}{\tilde{\gamma }}_{i}^{2}\right) {\lambda }_{i}]}{[{({\lambda }_{i}+k)}^{2}\ast {({\lambda }_{i}+1)}^{2}]}}</math>
 
|}
 
|}
 
| style="width: 5px;text-align: right;white-space: nowrap;" | (28)
 
| style="width: 5px;text-align: right;white-space: nowrap;" | (28)
Line 387: Line 597:
  
  
Since <math display="inline">k</math> must always be positive, in this section we obtain the condition under which the estimator in Eq. (27) is positive. For this purpose, we use the following theorem.

'''Theorem 5.1'''

If <math display="inline">\tilde{d}>max\left\{ \displaystyle\frac{1-{\tilde{\sigma }}^{2}/{\tilde{\gamma }}_{i}^{2}}{1+{\tilde{\sigma }}^{2}/\left( {\lambda }_{i}{\tilde{\gamma }}_{i}^{2}\right) }\right\}</math> for all <math display="inline">i,</math> then <math display="inline">{\tilde{k}}_{HM}</math> is always positive.

'''Proof.'''

If <math display="inline">\displaystyle\frac{{\sigma }^{2}\left( {\lambda }_{i}+d\right) -{\lambda }_{i}{\gamma }_{i}^{2}(1-d)}{{\gamma }_{i}^{2}\left( {\lambda }_{i}+1\right) }>0</math>, then the values of <math display="inline">k</math> are positive. Since <math display="inline">{\gamma }_{i}^{2}\left( {\lambda }_{i}+1\right) >0</math>, the numerator <math display="inline">{\sigma }^{2}\left( {\lambda }_{i}+d\right) -{\lambda }_{i}{\gamma }_{i}^{2}(1-d)</math> must be positive for all <math display="inline">i</math>. Then we get <math display="inline">d>\displaystyle\frac{1-{\sigma }^{2}/{\gamma }_{i}^{2}}{1+{\sigma }^{2}/\left( {\lambda }_{i}{\gamma }_{i}^{2}\right) }</math>, and because this bound depends on the unknown parameters <math display="inline">{\gamma }_{i}^{2}</math> and <math display="inline">{\sigma }^{2},</math> their unbiased estimators are substituted. Therefore <math display="inline">{\tilde{k}}_{HM}</math> is always positive if <math display="inline">\tilde{d}</math> is selected as <math display="inline">\tilde{d}>max\left\{ \displaystyle\frac{1-{\tilde{\sigma }}^{2}/{\tilde{\gamma }}_{i}^{2}}{1+{\tilde{\sigma }}^{2}/\left( {\lambda }_{i}{\tilde{\gamma }}_{i}^{2}\right) }\right\}</math>.

Note that <math display="inline">\displaystyle\frac{1-{\tilde{\sigma }}^{2}/{\tilde{\gamma }}_{i}^{2}}{1+{\tilde{\sigma }}^{2}/\left( {\lambda }_{i}{\tilde{\gamma }}_{i}^{2}\right) }</math> is always less than one, and since <math display="inline">d</math> must be between zero and one, we write the inequality for <math display="inline">\tilde{d}</math> as follows
  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
Line 422: Line 616:
 
{| style="text-align: center; margin:auto;"  
 
{| style="text-align: center; margin:auto;"  
 
|-
 
|-
| <math display="inline">max\left\{ \frac{1-{\tilde{\sigma }}^{2}{\tilde{\gamma }}_{i}^{2}}{1+{\tilde{\sigma }}^{2}/\left( {\lambda }_{i}{\tilde{\gamma }}_{i}^{2}\right) },o\right\} <\tilde{d}<1</math>
+
| <math display="inline">max\left\{\displaystyle \frac{1-{\tilde{\sigma }}^{2}{\tilde{\gamma }}_{i}^{2}}{1+{\tilde{\sigma }}^{2}/\left( {\lambda }_{i}{\tilde{\gamma }}_{i}^{2}\right) },o\right\} <\tilde{d}<1</math>
 
|}
 
|}
 
| style="width: 5px;text-align: right;white-space: nowrap;" | (29)
 
| style="width: 5px;text-align: right;white-space: nowrap;" | (29)
Line 428: Line 622:
  
  
Since <math display="inline">{\tilde{d}}_{opt}</math> in (28) depends on <math display="inline">k</math> and the estimator <math display="inline">{\tilde{k}}_{HM}</math> depends on <math display="inline">d</math>, we obtain these parameters with the following iterative method. Step 1, calculate <math display="inline">\tilde{d}</math> from Eq. (29). Step 2, calculate <math display="inline">{\tilde{k}}_{HM}</math> by using <math display="inline">\tilde{d}</math> from Step 1. Step 3, calculate <math display="inline">{\tilde{d}}_{opt}</math> from Eq. (28) by using the estimator <math display="inline">{\tilde{k}}_{HM}</math> from Step 2. Step 4, if <math display="inline">{\tilde{d}}_{opt}</math> is not between zero and one, use <math display="inline">{\tilde{d}}_{opt}=\tilde{d}</math>.
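
A sketch of this iteration is given below. Two points are our reading rather than verbatim from the paper: <math display="inline">{\tilde{k}}_{HM}</math> in Step 2 is taken as the harmonic mean <math display="inline">p/\sum _{i}(1/{\tilde{k}}_{i})</math> of the values in (27), and the initial <math display="inline">\tilde{d}</math> is taken as the midpoint of the interval (29).

<syntaxhighlight lang="python">
import numpy as np

def choose_k_d(lam, gamma_t, sigma2_t, n_iter=20):
    # lam: eigenvalues lambda_i; gamma_t, sigma2_t: unbiased estimates
    # Step 1: initial d from the interval (29) (midpoint -- our choice)
    lower = max(np.max((1 - sigma2_t / gamma_t**2) /
                       (1 + sigma2_t / (lam * gamma_t**2))), 0.0)
    d = 0.5 * (lower + 1.0)
    for _ in range(n_iter):
        # Step 2: harmonic mean of the k_i from Eq. (27)
        k_i = (sigma2_t * (lam + d) - lam * gamma_t**2 * (1 - d)) / \
              (gamma_t**2 * (lam + 1))
        k = len(lam) / np.sum(1.0 / k_i)
        # Step 3: d_opt from Eq. (28)
        w = (lam + k)**2 * (lam + 1)**2
        num = np.sum((((k + 1) * lam + k) * lam * gamma_t**2
                      - lam**2 * sigma2_t) / w)
        den = np.sum((sigma2_t + lam * gamma_t**2) * lam / w)
        d_new = num / den
        # Step 4: fall back to the interval value if d_opt leaves (0, 1)
        if not (0.0 < d_new < 1.0):
            d_new = d
        if abs(d_new - d) < 1e-10:
            d = d_new
            break
        d = d_new
    return k, d
</syntaxhighlight>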
  
==6. A simulation study==
  
In this section, we compare the performance of <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}},\tilde{\mathit{\boldsymbol{\beta }}}(k,d),{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}</math> and <math display="inline">{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)</math> with a simulation study. For this purpose, we calculate the estimated mean square error (EMSE) for various values of the sample size, the variances and the degree of collinearity. Following McDonald and Galarneau [27], we compute the fixed effects as
  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
Line 444: Line 638:
 
|}
 
|}
  
 
where <math display="inline">{w}_{ijc}</math> are independent standard normal pseudo-random numbers and <math display="inline">{\rho }^{2}</math> is the correlation between any two fixed effects. Three different values of <math display="inline">{\rho }^{2}</math> were considered: 0.75, 0.85 and 0.95. The <math display="inline">\mathit{\boldsymbol{Z}}</math> matrix is produced in a completely randomized design. Observations on the responses are then determined by
  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
Line 458: Line 651:
  
  
We consider two designs: in the first design <math display="inline">{n}_{i}=3</math> and in the second design <math display="inline">{n}_{i}=7</math>. Also, the same values <math display="inline">t=9,p=4</math> and <math display="inline">q=9</math> are taken in both designs. Following Özkale and Can [18], the <math display="inline">\mathit{\boldsymbol{\beta }}</math> vector was chosen as the eigenvector corresponding to the largest eigenvalue of the <math display="inline">{\boldsymbol{X}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}</math> matrix. The variance matrices of the random effects <math display="inline">{u}_{i}</math> and <math display="inline">{\epsilon }_{ij}</math> are <math display="inline">\mathit{\boldsymbol{G}}={\sigma }_{1}^{2}{\mathit{\boldsymbol{I}}}_{q}</math> and <math display="inline">\mathit{\boldsymbol{W}}={\sigma }^{2}{\mathit{\boldsymbol{I}}}_{n}</math>, respectively. The <math display="inline">{u}_{i}</math> are generated from the normal distribution <math display="inline">N(0,G)</math>. We considered <math display="inline">{\sigma }^{2}=0.5,1</math> and <math display="inline">{\sigma }_{1}^{2}=0.5,1</math>. The trial was replicated 1000 times by generating <math display="inline">{u}_{i}</math> and <math display="inline">{\epsilon }_{ij}</math>. For each simulated data set we derived <math display="inline">{\tilde{k}}_{HM}</math> and <math display="inline">{\tilde{d}}_{opt}</math>, and then calculated the estimated mean squared error (EMSE) and the relative mean squared error (RMSE) as
 
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
{| class="formulaSCP" style="width: 100%; text-align: center;"  
 
|-
 
|-
| <math>\mathrm{RMSE}\,\left( \tilde{\mathit{\boldsymbol{\beta }}}:{\tilde{\mathit{\boldsymbol{\beta }}}}^{\ast }\right) =</math><math>\frac{\mathrm{EMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}})}{\mathrm{EMSE}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}^{\ast }\right) }<br/></math>
+
| <math>\mathrm{RMSE}\,\left( \tilde{\mathit{\boldsymbol{\beta }}}:{\tilde{\mathit{\boldsymbol{\beta }}}}^{\ast }\right) =</math><math>\displaystyle\frac{\mathrm{EMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}})}{\mathrm{EMSE}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}^{\ast }\right) }</math>
 
|}
 
|}
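To make the Monte Carlo setup concrete, the following minimal sketch (Python/NumPy) generates one design and estimates the EMSE of the best linear unbiased estimator; the two-parameter and restricted estimators would be substituted in the same loop. The design-generation function and the random-intercept form of <math display="inline">\mathit{\boldsymbol{Z}}</math> are our reading of the description above, not the authors' code.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def make_X(n, p, rho2):
    # Collinear design in the McDonald-Galarneau style: all columns share
    # the common component w[:, p], giving pairwise correlation rho^2.
    w = rng.standard_normal((n, p + 1))
    return np.sqrt(1.0 - rho2) * w[:, :p] + np.sqrt(rho2) * w[:, [p]]

t, n_i, p, q = 9, 3, 4, 9                      # groups, group size, effects
n = t * n_i
sigma2, sigma2_1, rho2 = 0.5, 0.5, 0.75

X = make_X(n, p, rho2)
Z = np.kron(np.eye(t), np.ones((n_i, 1)))      # completely randomized design
H = sigma2_1 * (Z @ Z.T) + sigma2 * np.eye(n)  # Var(y) = Z G Z' + W
Hi = np.linalg.inv(H)

# beta: eigenvector of X'H^{-1}X for the largest eigenvalue (as in the text);
# numpy's eigh returns the eigenvalues in ascending order.
vals, vecs = np.linalg.eigh(X.T @ Hi @ X)
beta = vecs[:, -1]

reps = 1000
blue = np.empty((reps, p))
for r in range(reps):
    u = rng.normal(0.0, np.sqrt(sigma2_1), q)
    eps = rng.normal(0.0, np.sqrt(sigma2), n)
    y = X @ beta + Z @ u + eps
    blue[r] = np.linalg.solve(X.T @ Hi @ X, X.T @ Hi @ y)  # BLUE of beta

emse_blue = np.mean(np.sum((blue - beta) ** 2, axis=1))
print("EMSE(BLUE):", emse_blue)
# RMSE against a competitor: emse_blue / emse_competitor
</syntaxhighlight>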
When <math display="inline">\mathrm{RMSE}</math> is greater than one, it indicates that the estimator <math display="inline">{\tilde{\mathit{\boldsymbol{\beta }}}}^{\ast }</math> is superior to the estimator <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}</math>; for example, in [[#tab-1|Table 1]] with <math display="inline">{\rho }^{2}=0.75</math>, <math display="inline">{n}_{i}=3</math> and <math display="inline">\left( {\sigma }^{2},{\sigma }_{1}^{2}\right) =(0.5,0.5)</math>, we have <math display="inline">0.0001526/0.0001032\approx 1.48</math>, which matches the reported RMSE of 1.4795 up to the rounding of the EMSE values. For the stochastic linear restriction <math display="inline">\mathit{\boldsymbol{r}}=\mathit{\boldsymbol{R\beta }}+\Phi</math>, the matrix <math display="inline">\mathit{\boldsymbol{R}}</math> is <math display="inline">m\times p</math> and generated from the normal distribution <math display="inline">N(0,1)</math>, and the vector <math display="inline">\Phi</math> is generated from the normal distribution <math display="inline">N(0,\mathit{\boldsymbol{V}})</math>, where <math display="inline">\mathit{\boldsymbol{V}}</math> is taken as <math display="inline">\mathrm{diag}\,\left( {\sigma }^{2}{\mathit{\boldsymbol{I}}}_{m}\right)</math> with <math display="inline">m=2</math>. In [[#tab-1|Table 1]], we report the values of EMSE and RMSE for <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}},\tilde{\mathit{\boldsymbol{\beta }}}(k,d),{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}</math> and <math display="inline">{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)</math>. We have the following results for [[#tab-1|Table 1]]:
  
{| style="width: 100%;border-collapse: collapse;"
+
(i) In the whole table, the EMSE values of <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}(k,d)</math> is less than <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}</math>. Also, the EMSE values of <math display="inline">{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)</math> is less than <math display="inline">{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}</math>. In general, the EMSE values of <math display="inline">{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)</math> is less than all estimators.  
|-
+
 
|  style="border: 1pt solid black;vertical-align: top;"|p=0.75
+
(ii) As <math display="inline">{\rho }^{2}</math> and <math display="inline">{\sigma }^{2}</math> increase, the EMSE values of the estimators increase.
|  colspan='2'  style="border: 1pt solid black;vertical-align: top;"|<math>\left( {\sigma }^{2},{\sigma }_{1}^{2}\right) =(0.5,0.5)</math>
+
 
|  colspan='2'  style="border: 1pt solid black;vertical-align: top;"|<math>\left( {\sigma }^{2},{\sigma }_{1}^{2}\right) =(1,1)</math>
+
(iii) As <math display="inline">{\rho }^{2}</math> increases, difference between the EMSE values of the two parameter estimators and the EMSE values of the best linear unbiased estimators increase. This implies an increase in the improvement of the two-parameter estimators.
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|ni
+
|  style="border: 1pt solid black;vertical-align: top;"|3
+
|  style="border: 1pt solid black;vertical-align: top;"|7
+
|  style="border: 1pt solid black;vertical-align: top;"|3
+
|  style="border: 1pt solid black;vertical-align: top;"|7
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{EMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}})</math>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0001526      </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">7.3974e-05</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0003053        </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0001479</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{EMSE}\,\left( {\tilde{{\mathit{\boldsymbol{\beta }}}_{\mathit{\boldsymbol{r}}}}}_{\, }\right)</math>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0001461      </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">7.1668e-05</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0002923        </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0001433</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{EMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d))</math>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0001032      </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">6.9166e-05</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0001912          </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0001332</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{EMSE}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)\right)</math>  
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">9.9335e-05      </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">6.7089e-05</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0001845        </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0001293</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{RMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}:\tilde{\mathit{\boldsymbol{\beta }}}(k,d))</math>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.4795            </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.0695</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.5965                </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.1104</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{RMSE}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}:{\tilde{\mathit{\boldsymbol{\beta }}}}_{c}(k,d)\right)</math>  
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.4716            </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.0682</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.5845</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.1083</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{RMSE}\,\left( \tilde{\mathit{\boldsymbol{\beta }}}(k,d):{\tilde{\mathit{\boldsymbol{\beta }}}}_{\mathit{\boldsymbol{c}}}(k,d)\right)</math>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.0389            </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.0309</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.0363                </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.0301</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|p=0.85
+
|  colspan='2'  style="border: 1pt solid black;vertical-align: top;"|<math>\left( {\sigma }^{2},{\sigma }_{1}^{2}\right) =(0.5,0.5)</math>
+
|  colspan='2'  style="border: 1pt solid black;vertical-align: top;"|<math>\left( {\sigma }^{2},{\sigma }_{1}^{2}\right) =(1,1)</math>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|ni
+
|  style="border: 1pt solid black;vertical-align: top;"|3
+
|  style="border: 1pt solid black;vertical-align: top;"|7
+
|  style="border: 1pt solid black;vertical-align: top;"|3
+
|  style="border: 1pt solid black;vertical-align: top;"|7
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{EMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}})</math>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0002198</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0003460</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0004396        </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0002387</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{EMSE}\,\left( {\tilde{{\mathit{\boldsymbol{\beta }}}_{\mathit{\boldsymbol{r}}}}}_{\, }\right)</math>  
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0002076        </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0001134</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0004153        </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0002269</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{EMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d))</math>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0001332        </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0001045</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0002562        </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0002111</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{EMSE}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)\right)</math>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0001277        </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0002460</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0002459        </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0002014</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{RMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}:\tilde{\mathit{\boldsymbol{\beta }}}(k,d))</math>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.6495                </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.1414</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.7156            </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.1308</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{RMSE}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}:{\tilde{\mathit{\boldsymbol{\beta }}}}_{c}(k,d)\right)</math>  
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.6263                </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.1373</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.6889            </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.1266</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{RMSE}\,\left( \tilde{\mathit{\boldsymbol{\beta }}}(k,d):{\tilde{\mathit{\boldsymbol{\beta }}}}_{\mathit{\boldsymbol{c}}}(k,d)\right)</math>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.0430                </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.1219</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.0418</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.0481</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|p=0.95
+
|  colspan='2'  style="border: 1pt solid black;vertical-align: top;"|<math>\left( {\sigma }^{2},{\sigma }_{1}^{2}\right) =(0.5,0.5)</math>
+
|  colspan='2'  style="border: 1pt solid black;vertical-align: top;"|<math>\left( {\sigma }^{2},{\sigma }_{1}^{2}\right) =(1,1)</math>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|ni
+
|  style="border: 1pt solid black;vertical-align: top;"|3
+
|  style="border: 1pt solid black;vertical-align: top;"|7
+
|  style="border: 1pt solid black;vertical-align: top;"|3
+
|  style="border: 1pt solid black;vertical-align: top;"|7
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{EMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}})</math>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0005729</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0003460</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0011459          </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0006920</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{EMSE}\,\left( {\tilde{{\mathit{\boldsymbol{\beta }}}_{\mathit{\boldsymbol{r}}}}}_{\, }\right)</math>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0005033        </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0003044</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0010066          </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0006088</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{EMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d))</math>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0002762        </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0002760</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0005201          </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0005272</span>
+
|-
+
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{EMSE}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)\right)</math>  
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0002578        </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0002460</span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0004927          </span>
+
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">0.0004716</span>
+
  
 +
<div class="center" style="font-size: 75%;">'''Table 1'''. Estimated <math display="inline">MSE</math> and SMSE values with <math display="inline">{\tilde{k}}_{HM},{\tilde{d}}_{opt~}</math> and <math display="inline">t=</math><math>9</math></div>
  
|-
+
<div id='tab-1'></div>
| style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{RMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}:\tilde{\mathit{\boldsymbol{\beta }}}(k,d))</math>
+
{| class="wikitable" style="margin: 1em auto 0.1em auto;border-collapse: collapse;font-size:85%;width:auto;"  
| style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">2.0743              </span>
+
|-style="text-align:center"
style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.2533</span>
+
! <math>p=0.75</math> !!  colspan='2'  |<math>\left( {\sigma }^{2},{\sigma }_{1}^{2}\right) =(0.5,0.5)</math> !!  colspan='2'  | <math>\left( {\sigma }^{2},{\sigma }_{1}^{2}\right) =(1,1)</math>
style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">2.2030                </span>
+
|-style="text-align:center"
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.3125</span>
+
| ni
|-
+
|  3
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{RMSE}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}:{\tilde{\mathit{\boldsymbol{\beta }}}}_{c}(k,d)\right)</math>  
+
|  7
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.9524              </span>
+
|  3
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.2369</span>
+
|  7
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">2.0429                </span>
+
|-style="text-align:center"
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.2908</span>
+
|  <math>\mathrm{EMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}})</math>
|-
+
0.0001526     
|  style="border: 1pt solid black;vertical-align: top;"|<math>\mathrm{RMSE}\,\left( \tilde{\mathit{\boldsymbol{\beta }}}(k,d):{\tilde{\mathit{\boldsymbol{\beta }}}}_{\mathit{\boldsymbol{c}}}(k,d)\right)</math>  
+
|  7.3974e-05
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.0713              </span>
+
| 0.0003053       
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.1219</span>
+
|  0.0001479
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.0556                </span>
+
|-style="text-align:center"
|  style="border: 1pt solid black;vertical-align: top;"|<span style="text-align: center; font-size: 75%;">1.1178</span>
+
|  <math>\mathrm{EMSE}\,\left( {\tilde{{\mathit{\boldsymbol{\beta }}}_{\mathit{\boldsymbol{r}}}}}_{\, }\right)</math>  
 +
0.0001461     
 +
|  7.1668e-05
 +
| 0.0002923       
 +
|  0.0001433
 +
|-style="text-align:center"
 +
|  <math>\mathrm{EMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d))</math>
 +
0.0001032     
 +
|  6.9166e-05
 +
|  0.0001912         
 +
|  0.0001332
 +
|-style="text-align:center"
 +
| <math>\mathrm{EMSE}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)\right)</math>
 +
|  9.9335e-05     
 +
|  6.7089e-05
 +
|  0.0001845       
 +
|  0.0001293
 +
|-style="text-align:center"
 +
|  <math>\mathrm{RMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}:\tilde{\mathit{\boldsymbol{\beta }}}(k,d))</math>
 +
| 1.4795           
 +
1.0695
 +
|  1.5965               
 +
|  1.1104
 +
|-style="text-align:center"
 +
| <math>\mathrm{RMSE}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}:{\tilde{\mathit{\boldsymbol{\beta }}}}_{c}(k,d)\right)</math>  
 +
1.4716           
 +
|  1.0682
 +
|  1.5845
 +
|  1.1083
 +
|-style="text-align:center"
 +
| <math>\mathrm{RMSE}\,\left( \tilde{\mathit{\boldsymbol{\beta }}}(k,d):{\tilde{\mathit{\boldsymbol{\beta }}}}_{\mathit{\boldsymbol{c}}}(k,d)\right)</math>
 +
|  1.0389           
 +
|  1.0309
 +
|  1.0363               
 +
|  1.0301
 +
|-style="text-align:center"
 +
|  <math>p=0.85</math>
 +
|  colspan='2'  |<math>\left( {\sigma }^{2},{\sigma }_{1}^{2}\right) =(0.5,0.5)</math>
 +
colspan='2'  |<math>\left( {\sigma }^{2},{\sigma }_{1}^{2}\right) =(1,1)</math>
 +
|-style="text-align:center"
 +
| ni
 +
|  3
 +
|  7
 +
|  3
 +
|  7
 +
|-style="text-align:center"
 +
|  <math>\mathrm{EMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}})</math>
 +
0.0002198
 +
|  0.0003460
 +
|  0.0004396       
 +
|  0.0002387
 +
|-style="text-align:center"
 +
| <math>\mathrm{EMSE}\,\left( {\tilde{{\mathit{\boldsymbol{\beta }}}_{\mathit{\boldsymbol{r}}}}}_{\, }\right)</math>
 +
|  0.0002076       
 +
|  0.0001134
 +
|  0.0004153       
 +
|  0.0002269
 +
|-style="text-align:center"
 +
|  <math>\mathrm{EMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d))</math>
 +
0.0001332       
 +
|  0.0001045
 +
|  0.0002562       
 +
|  0.0002111
 +
|-style="text-align:center"
 +
| <math>\mathrm{EMSE}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)\right)</math>
 +
|  0.0001277       
 +
|  0.0002460
 +
|  0.0002459       
 +
|  0.0002014
 +
|-style="text-align:center"
 +
|  <math>\mathrm{RMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}:\tilde{\mathit{\boldsymbol{\beta }}}(k,d))</math>
 +
1.6495               
 +
|  1.1414
 +
|  1.7156           
 +
|  1.1308
 +
|-style="text-align:center"
 +
|  <math>\mathrm{RMSE}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}:{\tilde{\mathit{\boldsymbol{\beta }}}}_{c}(k,d)\right)</math>  
 +
| 1.6263               
 +
1.1373
 +
|  1.6889           
 +
|  1.1266
 +
|-style="text-align:center"
 +
| <math>\mathrm{RMSE}\,\left( \tilde{\mathit{\boldsymbol{\beta }}}(k,d):{\tilde{\mathit{\boldsymbol{\beta }}}}_{\mathit{\boldsymbol{c}}}(k,d)\right)</math>  
 +
1.0430               
 +
|  1.1219
 +
|  1.0418
 +
|  1.0481
 +
|-style="text-align:center"
 +
| <math>p=0.95</math>
 +
|  colspan='2'  |<math>\left( {\sigma }^{2},{\sigma }_{1}^{2}\right) =(0.5,0.5)</math>
 +
|  colspan='2'  |<math>\left( {\sigma }^{2},{\sigma }_{1}^{2}\right) =(1,1)</math>
 +
|-style="text-align:center"
 +
|  ni
 +
|  3
 +
|  7
 +
|  3
 +
|  7
 +
|-style="text-align:center"
 +
|  <math>\mathrm{EMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}})</math>
 +
0.0005729
 +
|  0.0003460
 +
|  0.0011459         
 +
|  0.0006920
 +
|-style="text-align:center"
 +
| <math>\mathrm{EMSE}\,\left( {\tilde{{\mathit{\boldsymbol{\beta }}}_{\mathit{\boldsymbol{r}}}}}_{\, }\right)</math>
 +
|  0.0005033       
 +
|  0.0003044
 +
|  0.0010066         
 +
|  0.0006088
 +
|-style="text-align:center"
 +
|  <math>\mathrm{EMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}(k,d))</math>
 +
0.0002762       
 +
|  0.0002760
 +
|  0.0005201         
 +
|  0.0005272
 +
|-style="text-align:center"
 +
| <math>\mathrm{EMSE}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)\right)</math>
 +
|  0.0002578       
 +
|  0.0002460
 +
|  0.0004927         
 +
|  0.0004716
 +
|-style="text-align:center"
 +
|  <math>\mathrm{RMSE}\,(\tilde{\mathit{\boldsymbol{\beta }}}:\tilde{\mathit{\boldsymbol{\beta }}}(k,d))</math>
 +
2.0743             
 +
|  1.2533
 +
|  2.2030               
 +
|  1.3125
 +
|-style="text-align:center"
 +
| <math>\mathrm{RMSE}\,\left( {\tilde{\mathit{\boldsymbol{\beta }}}}_{r}:{\tilde{\mathit{\boldsymbol{\beta }}}}_{c}(k,d)\right)</math>
 +
|  1.9524             
 +
|  1.2369
 +
|  2.0429               
 +
|  1.2908
 +
|-style="text-align:center"
 +
|  <math>\mathrm{RMSE}\,\left( \tilde{\mathit{\boldsymbol{\beta }}}(k,d):{\tilde{\mathit{\boldsymbol{\beta }}}}_{\mathit{\boldsymbol{c}}}(k,d)\right)</math>  
 +
|  1.0713             
 +
|  1.1219
 +
|  1.0556               
 +
|  1.1178
 
|}
 
|}
 
  
 
==7. Real data analysis==
  
We consider a data set known as the Egyptian pottery data to show the behavior of the new restricted and unrestricted two-parameter estimators. This data set arises from an extensive archaeological survey of pottery production and distribution in the ancient Egyptian city of Al-Amarna. The data consist of measurements of chemical contents (mineral elements) made on many samples of pottery using two different techniques, NAA and ICP (see Smith et al. [30] for a description of the techniques). The set of pottery was collected from different locations around the city. We fit the data set by the linear mixed model <math display="inline">\mathit{\boldsymbol{y}}=\mathit{\boldsymbol{X\beta }}+\mathit{\boldsymbol{Zu}}+\mathit{\boldsymbol{\epsilon }}</math>, where <math display="inline">\mathit{\boldsymbol{y}}</math> is a <math display="inline">159\times 1</math> vector of response variables, and <math display="inline">\mathit{\boldsymbol{X}}</math> and <math display="inline">\mathit{\boldsymbol{Z}}</math> are design matrices with dimensions <math display="inline">159\times 6</math> and <math display="inline">159\times 25</math>, respectively. First, we estimated the variance components by considering <math display="inline">{\sigma }_{1}^{2}=0.5</math> and <math display="inline">{\sigma }^{2}=0.5</math>. Then, by calculating the eigenvalues of <math display="inline">{\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}</math>, the condition number 8322860 is obtained, which indicates severe multicollinearity. We considered the stochastic linear restrictions as <math display="inline">\mathit{\boldsymbol{r}}=\mathit{\boldsymbol{R\beta }}+\Phi ,\Phi \sim N\left( 0,{\sigma }^{2}{\mathit{\boldsymbol{I}}}_{3}\right)</math>, with <math display="inline">m=3</math>, and applied the estimators of the previous sections to the Egyptian pottery data. Using the iterative method introduced at the end of Section 5, <math display="inline">{\tilde{k}}_{HM}</math> and <math display="inline">{\tilde{d}}_{opt}</math> are obtained as 0.348 and 0.373, respectively. In [[#tab-2|Table 2]], the estimated <math display="inline">MSE</math> values of the estimators are obtained by substituting into the corresponding theoretical MSE equations. We can see that the estimated MSE value of <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}(k,d)</math> is less than that of <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}</math>. Also, the estimated MSE value of <math display="inline">{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)</math> is less than that of <math display="inline">{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}</math>. In general, <math display="inline">{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)</math> has the smallest estimated MSE among all the estimators. So, we conclude that the stochastic restricted two-parameter estimator performs better than the other estimators. Note that in the results obtained for these data, all eigenvalues of <math display="inline">{\Delta }_{1}</math> and <math display="inline">{\Delta }_{2}</math> are positive, so the conditions of Theorem 4.1 and Theorem 4.2 hold. In [[#img-1|Figure 1]], the estimated MSE values of <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}</math> and <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}(k,d)</math> are plotted against <math display="inline">k</math> in the interval [0,2] with fixed <math display="inline">{\tilde{d}}_{opt}=0.373</math>. Because <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}</math> does not depend on <math display="inline">k</math>, its estimated <math display="inline">MSE</math> value is the same for all <math display="inline">k</math> values. It is clear that the estimated <math display="inline">MSE</math> value of <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}(k,d)</math> is always less than that of <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}</math>. Altogether, the two-parameter estimators can perform better than <math display="inline">\tilde{\mathit{\boldsymbol{\beta }}}</math> in the MSEM criterion under the stated conditions.
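The condition number quoted above can be reproduced as the ratio of the largest to the smallest eigenvalue of <math display="inline">{\mathit{\boldsymbol{X}}}^{'}{\mathit{\boldsymbol{H}}}^{-1}\mathit{\boldsymbol{X}}</math>. A short sketch follows; loading the pottery-data matrices <code>X</code> and <code>Z</code> is assumed and not shown here.

<syntaxhighlight lang="python">
import numpy as np

def condition_number(X, Z, sigma2, sigma2_1):
    # Ratio of extreme eigenvalues of X'H^{-1}X; values this large are
    # commonly read as severe multicollinearity.
    H = sigma2_1 * (Z @ Z.T) + sigma2 * np.eye(X.shape[0])
    lam = np.linalg.eigvalsh(X.T @ np.linalg.inv(H) @ X)
    return lam.max() / lam.min()

# e.g. condition_number(X, Z, 0.5, 0.5) with the 159x6 X and 159x25 Z
</syntaxhighlight>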
  
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
+
<div class="center" style="font-size: 75%;">'''Table 2'''. Estimated MSE values of the proposed estimators</div>
Table 2. Estimated MSE values of the proposed estimators.</div>
+
  
{| style="width: 100%;border-collapse: collapse;"
+
<div id='tab-2'></div>
|-
+
{| class="wikitable" style="margin: 1em auto 0.1em auto;border-collapse: collapse;font-size:85%;width:auto;"  
|  style="border: 1pt solid black;vertical-align: top;"|
+
|-style="text-align:center"
| style="border: 1pt solid black;"|<math>\tilde{\mathit{\boldsymbol{\beta }}}{\, }{\, }{\, }</math>
+
! !! <math>\tilde{\mathit{\boldsymbol{\beta }}}{\, }{\, }{\, }</math> !! <math>{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}</math> !! <math>\tilde{\mathit{\boldsymbol{\beta }}}(k,d)</math> !! <math>{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)</math>
 
+
|-style="text-align:center"
 
+
|  EMSE
|  style="border: 1pt solid black;"|<math>{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}</math>
+
|  1.216069
 
+
|  1.201379
 
+
|  0.1544953
|  style="border: 1pt solid black;"|<math>\tilde{\mathit{\boldsymbol{\beta }}}(k,d)</math>
+
|  0.1541745
 
+
|}
 
+
|  style="border: 1pt solid black;"|<math>{\tilde{\mathit{\boldsymbol{\beta }}}}_{r}(k,d)</math>
+
  
  
 +
<div id='img-1'></div>
 +
{| style="text-align: center; border: 1px solid #BBB; margin: 1em auto; width: auto;max-width: auto;"
 
|-
 
|-
| style="border: 1pt solid black;"|EMSE
+
|style="padding:10px;"| [[Image:Review_312425094190-image1.png|276px]]
| style="border: 1pt solid black;"|1.216069
+
|- style="text-align: center; font-size: 75%;"
style="border: 1pt solid black;"|1.201379
+
| colspan="1" style="padding:10px;"| '''Figure 1'''. The estimated mean square error values of the estimators versus <math display="inline">k</math> with <math display="inline">{\tilde{d}}_{of~}</math>
|  style="border: 1pt solid black;"|0.1544953
+
|  style="border: 1pt solid black;"|0.1541745
+
 
|}
 
|}
 
 
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
 
<span style="text-align: center; font-size: 75%;"> [[Image:Review_312425094190-image1.png|276px]] </span></div>
 
 
<div class="center" style="width: auto; margin-left: auto; margin-right: auto;">
 
Figure <math display="inline">1.</math> The estimated mean square error values of the estimators versus <math display="inline">k</math> with <math display="inline">{\tilde{d}}_{of~}</math></div>
 
  
 
==8. Conclusion==
  
 
==Funding==

The research was supported by the Islamic Azad University.
  
 
==References==
  
<div class="auto" style="text-align: left;width: auto; margin-left: auto; margin-right: auto;font-size: 85%;">

[1] Zenzile T.G. Community health workers' understanding of their role in rendering maternal, child and women's health services. Diss. North-West University, 2018.

[2] Fiona S., Dierenfeld E.S., Langley-Evans S.C., Hamilton E., Lark R.M., Yon L., Watts M.J. Potential bio-indicators for assessment of mineral status in elephants. Sci. Rep-UK, 10(1):1-14, 2020.

[3] Rajat K., Guler I., Nerkar A. Entangled decisions: Knowledge interdependencies and terminations of patented inventions in the pharmaceutical industry. Strateg. Manage. J., 39(9):2439-2465, 2018.

[4] Adam L., Corsi D.J., Venechuk G.E. Schools influence adolescent e-cigarette use, but when? Examining the interdependent association between school context and teen vaping over time. J. Youth. Adolescence, 48(10):1899-1911, 2019.

[5] Zhiyi C., Zhu S., Niu Q., Zuo T. Knowledge discovery and recommendation with linear mixed model. IEEE Access, 8:38304-38317, 2020.

[6] Henderson C.R. Estimation of genetic parameters. Ann. Math. Stat., 21:309–310, 1950.

[7] Henderson C.R., Searle S.R., VonKrosig C.N. Estimation of environmental and genetic trends from records subject to culling. Biometrics, 15:192–218, 1959.

[8] Farebrother R.W. Further results on the mean square error of ridge regression. J. Royal. Stat. Soc., Series B (Methodological), 38(3):248-250, 1976.

[9] Liu K. A new class of biased estimate in linear regression. Commun. Statist. Theor. Meth., 22(2):393–402, 1993.

[10] Liu X.Q., Hu P. General ridge predictors in a mixed linear model. J. Theor. Appl. Stat., 47:363–378, 2013.

[11] Yang H., Chang X. A new two-parameter estimator in linear regression. Commun. Statist. Theor. Meth., 39:923–934, 2010.

[12] Theil H., Goldberger A.S. On pure and mixed statistical estimation in economics. Int. Econ. Rev., 2:65–78, 1961.

[13] Theil H. On the use of incomplete prior information in regression analysis. J. Amer. Statist. Assoc., 58:401–414, 1963.

[14] Gilmour A.R., Cullis B.R., Welham S.J., Gogel B.J., Thompson R. An efficient computing strategy for predicting in mixed linear models. Comput. Statist. Data Anal., 44:571–586, 2004.

[15] Jiming J., Lahiri P. Mixed model prediction and small area estimation. Test, 15(1):1-96, 2006.

[16] Patel S.R., Patel N.P. Mixed effect exponential linear model. Commun. Statist. Theor. Meth., 21(9):2721-2740, 1992.

[17] Eliot M.N., Ferguson J., Reilly M.P., Foulkes A.S. Ridge regression for longitudinal biomarker data. Int. J. Biostat., 7:1–11, 2011.

[18] Özkale M.R., Can F. An evaluation of ridge estimator in linear mixed models: an example from kidney failure data. J. Appl. Statist., 44(12):2251–2269, 2017.

[19] Kuran O., Özkale M.R. Gilmour's approach to mixed and stochastic restricted ridge predictions in linear mixed models. Linear. Algebra. Appl., 508:22–47, 2016.

[20] Hartley H.O., Rao J.N. Maximum-likelihood estimation for the mixed analysis of variance model. Biometrika, 54:93–108, 1967.

[21] Lee Y., Nelder J.A. Generalized linear models for the analysis of quality-improvement experiments. Canad. J. Stat., 26(1):95–105, 1998.

[22] Hoerl A.E., Kennard R.W. Ridge regression: biased estimation for non-orthogonal problems. Technometrics, 12:55–67, 1970.

[23] Hoerl A.E., Kennard R.W. Ridge regression: iterative estimation of the biasing parameter. Commun. Statist. Theor. Meth., 5:77–88, 1976.

[24] Wencheko E. Estimation of the signal-to-noise in the linear regression model. Statist. Pap., 41:327–343, 2000.

[25] Kibria B.M. Performance of some new ridge regression estimators. Commun. Statist. Simul. Computat., 32:419–435, 2003.

[26] Mallows C.L. Some comments on Cp. Technometrics, 15:661–675, 1973.

[27] McDonald G.C., Galarneau D.I. A Monte Carlo evaluation of some ridge-type estimators. J. Amer. Statist. Assoc., 70:407–416, 1975.

[28] Golub G.H., Heath M., Wahba G. Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics, 21:215–223, 1979.

[29] Craven P., Wahba G. Smoothing noisy data with spline functions. Numer. Math., 31:377–403, 1978.

[30] Smith D.M., Hart F.A., Symond R.D., Walsh J.N. Analysis of Roman pottery from Colchester by inductively coupled plasma spectrometry. Sci. Archaeolog. Glasgow, 196:41-55, 1987.

</div>

Latest revision as of 15:51, 14 December 2021

Abstract

In this article, two parameter estimation using penalized likelihood method in the linear mixed model is proposed. In addition, by considering the stochastic linear restriction for the vector of fixed effects parameters we are introduced the stochastic restricted two parameter estimation. Methods are proposed for estimating variance parameters when unknown. Also, the superiority conditions of the two parameter estimator over the best linear unbiased estimator, and the stochastic restricted two parameter estimator over the stochastic restricted best linear unbiased estimator are obtained under the mean square error matrix sense. Methods are proposed for estimating of the biasing parameters. Finally, a simulation study and a numerical example are given to evaluate the proposed estimators.

Keywords: Linear mixed model, two parameter estimation, stochastic restricted two parameter estimation, matrix mean square error

1. Introduction

Today many datasets lack the assumption of data independence, which is the main presupposition of many statistical models. For example data collected by cluster or hierarchical sampling, lengthwise studies and frequent measurements or in medical research that simultaneously provides data from one or more body members, the assumption of data independence is unacceptable because the data of a cluster, a group, or an individual are interdependent over time [1]. The default requirement for fitting linear models is the assumption of data independence that does not exist so the use of these models although it leads to unbiased estimates but the variance of estimating coefficients is strongly influenced by the default of data independence. In other words if the data are not independent then the standard error and therefore the confidence interval and the result test result will be for non-trust regression coefficients. Therefore in analyzing these data it is necessary to use methods that can consider this dependence. One of the most important ways to solve this problem is linear mixed models which are generalizations of simple linear models that provide the possibility of random and fixed effects with each other. Linear mixed models are used in many fields of physical, biological, medical and social sciences [2-5].

We consider the linear mixed model (LMM) as follows:

,
(1)

where is an vector of observations, with is an design matrix corresponding to the -th random effects factor and , is an observed design matrix for the fixed effects, is a parameter vector of unknown fixed effects, is a unobservable vector of random effects and is an unobservable vector of random errors. and are independent and have a multivariate normal distribution as


where and are and vectors of variance parameters corresponding to and , respectively. Henderson et al. [6-7] introduced the set of equations called mixed model equations, and obtained and as

where and . They and are called the best linear unbiased estimator (BLUE) and the best linear unbiased predictor (BLUP), respectively. One of the most common estimators in linear regression is the ordinary least squares (OLS) estimator, which in the case of multicollinearity may lead to estimates with adverse effects such as high variance [8]. To reduce the effects of multicollinearity. Liu et al. [9-10] proposed the ridge estimator and the Liu estimator respectively, which are the well-known alternatives of the OLS estimator. Yang and Chang [11] obtained the two parameter estimator “Using the mixed estimation technique introduced by Theil et al. [12-13]. They considered the prior information about in the form of restriction as where and are respectively the ridge, Liu parameters and the ridge estimator”.

In , authors such as Gilmour et al. [14], Jiming and Lahiri [15] and Patel and Patel [16], considered a state where the matrix is singular. Liu and Hu [10] and Eliot et al. [17] inquired the ridge prediction in LMM. Liu and Hu [10] are obtained and as

where and are the ridge estimator of and the ridge predictor of respectively. Özkale and Can [18] gave “an example from kidney failure data” to evaluate ridge estimator in linear mixed model. Kuran and Özkale [19] obtained the mixed and stochastic restricted ridge predictors by using Gilmour approach. They introduced “stochastic linear restriction as where is an vector, is an known matrix of rank and is an random vector that is assumed to be distributed with and where is vector of variance parameters corresponding to Also and are independent”

They then derived the stochastic restricted estimator of <math>\beta</math> and the stochastic restricted predictor of <math>u</math>, respectively, which take the standard mixed-estimation form

<math>\tilde\beta_r=\tilde\beta+(X'H^{-1}X)^{-1}R'\left(V+R(X'H^{-1}X)^{-1}R'\right)^{-1}(r-R\tilde\beta),\qquad \tilde u_r=GZ'H^{-1}(y-X\tilde\beta_r).</math>

Furthermore, they obtained the stochastic restricted ridge estimator of <math>\beta</math> and the stochastic restricted ridge predictor of <math>u</math>; see [19] for the explicit expressions.
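A small computational sketch of this update is given below, under the assumption that the stochastic restricted estimator takes the Theil–Goldberger mixed-estimation form shown above; the function name and arguments are illustrative, and the exact expressions in [19] may differ in detail.

<pre>
# A sketch of the stochastic restricted estimator in Theil-Goldberger form.
import numpy as np

def restricted_blue(X, Hinv, y, Rmat, r, V):
    """Update the BLUE with the stochastic restriction r = Rmat beta + phi,
    where Var(phi) = V is assumed known."""
    C = X.T @ Hinv @ X                        # information matrix of the BLUE
    Cinv = np.linalg.inv(C)
    beta = Cinv @ (X.T @ Hinv @ y)            # unrestricted BLUE
    S = V + Rmat @ Cinv @ Rmat.T
    gain = Cinv @ Rmat.T @ np.linalg.inv(S)
    return beta + gain @ (r - Rmat @ beta)    # stochastic restricted BLUE
</pre>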

In this article, we obtain new two-parameter estimators in linear mixed models by adopting Yang and Chang's ideas [11] and considering their restriction. In Section 2 we follow the idea of Henderson's mixed model equations to derive the two-parameter estimator; then, by imposing stochastic linear restrictions on the vector of fixed-effects parameters, we derive the stochastic restricted two-parameter estimator. In Section 3, estimates of the variance parameters are obtained for the case where they are unknown. In Section 4, we compare the new two-parameter estimators under the mean square error matrix (MSEM) sense. In Section 5, methods are proposed for estimating the biasing parameters. In Sections 6 and 7, a simulation study and a real data analysis are given. Finally, a summary and some conclusions are given in Section 8.

2. The proposed estimators

Under model (1), we have <math>y\mid u \sim N(X\beta+Zu,\,R)</math>, and the joint distribution of <math>y</math> and <math>u</math> is given by

<math>\begin{pmatrix} y \\ u \end{pmatrix}\sim N\!\left(\begin{pmatrix} X\beta \\ 0 \end{pmatrix},\begin{pmatrix} H & ZG \\ GZ' & G \end{pmatrix}\right),</math>

where <math>G</math> and <math>R</math> are nonsingular. If the restriction used by Yang and Chang [11] in linear regression is carried over to the linear mixed model, we can produce the two-parameter estimator using the "penalized term" idea. Thus, unifying the restriction with model (1) gives

(2)

where

with

Then the augmented response and <math>u</math> are jointly normally distributed. The conditional distribution of the augmented response given <math>u</math> is normal, and the logarithm of the joint density of the augmented response and <math>u</math> is given by

The penalized log-likelihood function is obtained by substitution as follows:

(3)

From Eq. (3), we take the partial derivatives with respect to <math>\beta</math> and <math>u</math>, set the resulting equations to zero, and denote the solutions by <math>\tilde\beta(k,d)</math> and <math>\tilde u(k,d)</math>, which gives

(4)
(5)

Solving Eq. (5) yields

(6)

Substituting Eq. (6) into Eq. (4), we get

(7)


Equivalently, this equation can be written as

(8)


Rewriting Eq. (8), <math>\tilde\beta(k,d)</math> is obtained as follows

(9)

From this we get

(10)

Using Eq. (10), <math>\tilde u(k,d)</math> equals <math>GZ'H^{-1}\left(y-X\tilde\beta(k,d)\right)</math>.
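A hedged computational sketch of the resulting estimator follows. It assumes the closed form <math>\tilde\beta(k,d)=(X'H^{-1}X+kI_p)^{-1}(X'H^{-1}y+kd\,\tilde\beta)</math>, an analogy-based reading of Eqs. (7)-(9) in the spirit of Yang and Chang [11] and of the ridge estimator above, not necessarily the paper's expression verbatim.

<pre>
# A hedged sketch of a two-parameter estimator of the Yang-Chang type in the
# LMM, assuming the closed form (X'H^{-1}X + kI)^{-1}(X'H^{-1}y + k d beta_blue).
import numpy as np

def two_parameter_estimator(X, Hinv, y, k, d):
    C = X.T @ Hinv @ X
    beta_blue = np.linalg.solve(C, X.T @ Hinv @ y)
    p = X.shape[1]
    return np.linalg.solve(C + k * np.eye(p),
                           X.T @ Hinv @ y + k * d * beta_blue)
</pre>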

In this section, we also obtain the stochastic restricted two-parameter estimator. For this purpose, the stochastic linear restriction <math>r=R\beta+\phi</math> can be unified with model (1) and the restriction of Yang and Chang to give

(11)

where

Then, the conditional distribution of the augmented response given <math>u</math> is normal, and the logarithm of the joint density of the augmented response and <math>u</math> is given by


Substituting as in Eq. (3), the penalized log-likelihood function is obtained as follows:

(12)


From Eq. (12), we take the partial derivatives with respect to <math>\beta</math> and <math>u</math>, set the resulting equations to zero, and denote the solutions by <math>\tilde\beta_r(k,d)</math> and <math>\tilde u_r(k,d)</math>, which gives

(13)
(14)


Solving these equations in the same way as Eqs. (4) and (5), the following results are obtained

(15)
(16)

3. Estimation of variance parameters

In linear mixed models, the variance parameters within <math>G</math> and <math>R</math> are often unknown, and several methods have been proposed to estimate them [16,20-22]. In this section, we estimate the variance parameters using the ML method. The marginal distribution of <math>y</math> is <math>N(X\beta,\,H)</math>; therefore we can write the marginal log-likelihood function as

<math>\ell(\beta,\theta)=-\tfrac{1}{2}\left[n\log 2\pi+\log\lvert H\rvert+(y-X\beta)'H^{-1}(y-X\beta)\right],</math>
(17)

where <math>\theta</math> collects the vectors of unknown variance parameters <math>\theta_u</math> and <math>\theta_\varepsilon</math>. Differentiating Eq. (17) with respect to <math>\beta</math> and the variance parameters, the partial derivatives are obtained as

(18)
(19)
(20)

Setting Eqs. (18)-(20) equal to zero and substituting the estimators for the unknown parameters gives

(21)
(22)
(23)

Solving Eqs. (21) and (23) yields the estimators

(24)
(25)


Eq. (23) depends on the variance parameters themselves, so iterative procedures must be used to solve for them. In the statistical literature, there are four common iterative procedures for estimating variance parameters: the Newton-Raphson (NR), Expectation-Maximization (EM), Fisher Scoring (FS) and Average Information (AI) algorithms; see [22] for details of these procedures. Note that in the stochastic restricted two-parameter method, the ML estimators are obtained similarly to Eqs. (21)-(23).
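As an alternative to iterating the score equations (18)-(20) by hand, the variance parameters can also be estimated by maximizing the marginal log-likelihood (17) numerically. The sketch below does this for the assumed special case <math>G=\sigma_u^2 I_q</math> and <math>R=\sigma_\varepsilon^2 I_n</math> (one variance parameter each); the function names are illustrative.

<pre>
# A sketch of ML estimation of the variance parameters by numerically
# maximizing the marginal log-likelihood (17).
import numpy as np
from scipy.optimize import minimize

def neg_marginal_loglik(log_theta, y, X, Z):
    s2u, s2e = np.exp(log_theta)              # log scale keeps variances positive
    n = len(y)
    H = s2u * (Z @ Z.T) + s2e * np.eye(n)
    Hinv = np.linalg.inv(H)
    beta = np.linalg.solve(X.T @ Hinv @ X, X.T @ Hinv @ y)  # profile out beta
    resid = y - X @ beta
    _, logdet = np.linalg.slogdet(H)
    return 0.5 * (logdet + resid @ Hinv @ resid)  # additive constant dropped

# Usage (y, X, Z as in model (1)):
# fit = minimize(neg_marginal_loglik, x0=np.zeros(2), args=(y, X, Z),
#                method="Nelder-Mead")
# s2u_hat, s2e_hat = np.exp(fit.x)
</pre>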

4. Comparison of estimators

In this section, we compare the estimator <math>\tilde\beta(k,d)</math> with <math>\tilde\beta</math> and the estimator <math>\tilde\beta_r(k,d)</math> with <math>\tilde\beta_r</math> in the mean square error matrix (MSEM) sense. An estimator <math>\hat\beta_2</math> is superior to <math>\hat\beta_1</math> in the MSEM sense if and only if <math>\mathrm{MSEM}(\hat\beta_1)-\mathrm{MSEM}(\hat\beta_2)</math> is a positive definite (pd) matrix. The mean square error matrix of an estimator <math>\hat\beta</math> is given as

<math>\mathrm{MSEM}(\hat\beta)=\mathrm{Var}(\hat\beta)+\mathrm{bias}(\hat\beta)\,\mathrm{bias}(\hat\beta)',</math>

where <math>\mathrm{Var}(\hat\beta)</math> is the variance matrix and <math>\mathrm{bias}(\hat\beta)=E(\hat\beta)-\beta</math> is the bias vector; the variance matrix and bias of <math>\tilde\beta(k,d)</math> follow from Eq. (9).

The mean square error matrix of the estimator <math>\tilde\beta_r(k,d)</math> is obtained in the same way from its variance matrix and bias vector, which follow from Eq. (15).


4.1 Comparison of the estimator β̃(k,d) with β̃

Since <math>\tilde\beta</math> is unbiased, its MSEM is its variance matrix, <math>\mathrm{MSEM}(\tilde\beta)=\mathrm{Var}(\tilde\beta)=(X'H^{-1}X)^{-1}</math>, and the MSEM of <math>\tilde\beta(k,d)</math> is


The estimator <math>\tilde\beta(k,d)</math> is superior to the estimator <math>\tilde\beta</math> in the MSEM sense if and only if <math>\mathrm{MSEM}(\tilde\beta)-\mathrm{MSEM}(\tilde\beta(k,d))</math> is pd; that is,


According to Farebrother [8], if <math>D</math> is a positive definite matrix and <math>b</math> a nonzero vector, then <math>D-bb'</math> is nonnegative definite if and only if <math>b'D^{-1}b\le 1</math>. Hence the necessary and sufficient condition for <math>\tilde\beta(k,d)</math> to be superior to <math>\tilde\beta</math> is that the bias vector <math>b</math> of <math>\tilde\beta(k,d)</math> satisfies <math>b'D^{-1}b\le 1</math>, where <math>D</math> denotes the difference of the variance matrices.
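The MSEM comparison above therefore reduces to a scalar check. The following sketch verifies the Farebrother-type condition numerically; the function names are illustrative, and <math>D</math> and <math>b</math> stand for the variance-difference matrix and the bias vector of the comparison.

<pre>
# A sketch of the Farebrother-type check: for positive definite D and bias
# vector b, D - b b' is nonnegative definite iff b' D^{-1} b <= 1.
import numpy as np

def farebrother_condition(D, b):
    """True when D - outer(b, b) is nonnegative definite, via b'D^{-1}b <= 1."""
    return float(b @ np.linalg.solve(D, b)) <= 1.0

def msem_difference_is_nnd(D, b, tol=1e-10):
    """Direct eigenvalue cross-check of the same condition."""
    return bool(np.linalg.eigvalsh(D - np.outer(b, b)).min() >= -tol)
</pre>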

5. Selection of parameters k and d

In the linear regression model, the choice of the biasing parameter is important, so many statisticians have suggested methods for obtaining it; see [23-29], among others. Following Özkale and Can [18], we rewrite model (1) in the form of a marginal model in which the random effects are not explicitly defined:

<math>y = X\beta + \varepsilon^{*},\qquad \varepsilon^{*}=Zu+\varepsilon\sim N(0,\,H).</math>
(26)


Because <math>H</math> is pd, there exists a nonsingular symmetric matrix <math>H^{1/2}</math> such that <math>H=H^{1/2}H^{1/2}</math>. If we multiply both sides of model (26) by <math>H^{-1/2}</math>, we get <math>\tilde y=\tilde X\beta+\tilde\varepsilon</math>, where <math>\tilde y=H^{-1/2}y</math>, <math>\tilde X=H^{-1/2}X</math> and <math>\tilde\varepsilon=H^{-1/2}\varepsilon^{*}</math>. The matrix <math>\tilde X'\tilde X</math> is symmetric, so there is an orthogonal matrix <math>Q</math> such that <math>Q'\tilde X'\tilde X Q=\Lambda=\mathrm{diag}(\lambda_1,\ldots,\lambda_p)</math>, where <math>\lambda_1\ge\cdots\ge\lambda_p>0</math> are the ordered eigenvalues of <math>\tilde X'\tilde X</math>. Then model (26) can be rewritten in a canonical form as <math>\tilde y=X^{*}\alpha+\tilde\varepsilon</math>, where <math>X^{*}=\tilde X Q</math> and <math>\alpha=Q'\beta</math>. Under this model, we get the following representation:

We use the scalar mean square error (MSE) to find the optimal values of <math>k</math> and <math>d</math>. With <math>d</math> fixed, the optimal value of <math>k</math> can be obtained by minimizing the following expression

Differentiating this expression with respect to <math>k</math> and setting the derivative to zero, we obtain the optimal value. Since the optimal <math>k</math> depends on the unknown <math>\sigma^2</math> and <math>\alpha</math>, following Hoerl and Kennard [22] we can get an estimate of <math>k</math> by substituting <math>\hat\sigma^2</math> and <math>\hat\alpha</math> as follows:

(27)

where <math>\hat\sigma^2</math> and <math>\hat\alpha</math> are the unbiased estimators of <math>\sigma^2</math> and <math>\alpha</math>. Following the estimators of <math>k</math> proposed by Kibria [25] and Hoerl and Kennard [22], the harmonic mean of the values <math>\hat k_i</math> in (27) is

Now, with <math>k</math> fixed, we get the optimal value of <math>d</math> by minimizing the MSE, which yields

(28)


Since <math>k</math> must always be positive, in this section we derive the condition under which the estimator in Eq. (27) is positive. For this purpose, we use the following theorem.

Theorem 5.1

If the stated bound holds for all <math>i</math>, then the <math>\hat k_i</math> are always positive.

Proof.

If the condition holds, then the values of <math>\hat k_i</math> are positive. Since the denominator must be positive for all <math>i</math>, we obtain a bound on <math>d</math>; because this bound depends on the unknown parameters <math>\sigma^2</math> and <math>\alpha</math>, their unbiased estimators are substituted. Therefore <math>\hat k</math> is always positive if <math>d</math> is selected below the estimated bound.

Note that this bound is always less than one, and since <math>d</math> must lie between zero and one, we consider the following inequality

(29)


Since <math>\hat d</math> in (28) depends on <math>\hat k</math>, and the estimators in <math>\hat k</math> depend on <math>\hat d</math>, we use an iterative method for these parameters as follows. Step 1, calculate an admissible initial <math>\hat d</math> from Eq. (29). Step 2, calculate <math>\hat k</math> by using the <math>\hat d</math> from Step 1. Step 3, recalculate <math>\hat d</math> from Eq. (28) by using the estimator from Step 2. Step 4, if <math>\hat d</math> is not between zero and one, use the value from Eq. (29). A schematic sketch of this procedure is given below.
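In the sketch, d_from_eq29, k_given_d and d_from_eq28 are hypothetical helper names standing in for Eqs. (29), (27) and (28), whose explicit forms are given in the text; the control flow is the part being illustrated.

<pre>
# A schematic sketch of the iterative selection of k and d described above.
def select_k_d(d_from_eq29, k_given_d, d_from_eq28):
    d0 = d_from_eq29()            # Step 1: admissible initial d from Eq. (29)
    k = k_given_d(d0)             # Step 2: k computed with the initial d
    d = d_from_eq28(k)            # Step 3: optimal d from Eq. (28) given k
    if not (0.0 <= d <= 1.0):     # Step 4: fall back to the Eq. (29) value
        d = d0
    return k, d
</pre>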

6. A simulation study

In this section, we compare the performance of <math>\tilde\beta(k,d)</math>, <math>\tilde\beta_r(k,d)</math>, <math>\tilde\beta</math> and <math>\tilde\beta_r</math> in a simulation study. For this purpose, we calculate the estimated mean square error (EMSE) for various values of the sample size, the variance and the degree of collinearity. Following McDonald and Galarneau [27], we computed the fixed-effects design as

<math>x_{ij}=(1-\gamma^{2})^{1/2}z_{ij}+\gamma z_{i,p+1},\qquad i=1,\ldots,n,\;\; j=1,\ldots,p,</math>
(30)

where the <math>z_{ij}</math> are independent standard normal pseudo-random numbers and <math>\gamma^{2}</math> is the correlation between any two fixed-effects columns. Three different values of <math>\gamma</math> were considered: 0.75, 0.85 and 0.95. The matrix <math>Z</math> is produced in a completely randomized design. Observations on the responses are then determined by

<math>y = X\beta + Zu + \varepsilon.</math>
(31)


We consider two designs, which differ in the group structure of the random effects; the same values of <math>\beta</math> and <math>\gamma</math> are used in both designs. Following Özkale and Can [18], the vector <math>\beta</math> was chosen as the eigenvector corresponding to the largest eigenvalue of the design cross-product matrix. The variance matrices of the random effects <math>u</math> and the errors <math>\varepsilon</math> are multiples of identity matrices, and both vectors are generated from normal distributions. The trial was replicated 1000 times by regenerating <math>u</math> and <math>\varepsilon</math>. For each simulated data set we computed <math>\tilde\beta</math>, <math>\tilde\beta_r</math>, <math>\tilde\beta(k,d)</math> and <math>\tilde\beta_r(k,d)</math>; the estimated mean squared error (EMSE) was then calculated as

<math>\mathrm{EMSE}(\hat\beta)=\frac{1}{1000}\sum_{r=1}^{1000}(\hat\beta_r-\beta)'(\hat\beta_r-\beta),</math>

and the relative mean squared error (RMSE) of a pair of estimators as the ratio of their EMSE values,

<math>\mathrm{RMSE}(\hat\beta_1,\hat\beta_2)=\mathrm{EMSE}(\hat\beta_1)/\mathrm{EMSE}(\hat\beta_2)</math>. When the RMSE is greater than one, the estimator <math>\hat\beta_2</math> is superior to the estimator <math>\hat\beta_1</math>. For the stochastic linear restriction, the matrix <math>R</math> and the perturbation <math>\phi</math> are generated from normal distributions. In Table 1, we report the EMSE and RMSE values for the four estimators. We draw the following conclusions from Table 1:
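As an illustration of the design (30), the sketch below generates a fixed-effects matrix with a prescribed degree of collinearity; make_collinear_X is an illustrative name, and by construction the correlation between any two columns is approximately <math>\gamma^{2}</math>.

<pre>
# A sketch of the McDonald-Galarneau construction (30): columns of X share
# the common component z_{i,p+1}, so any two columns have correlation close
# to gamma**2.
import numpy as np

def make_collinear_X(n, p, gamma, rng):
    Zmat = rng.standard_normal((n, p + 1))
    return np.sqrt(1.0 - gamma**2) * Zmat[:, :p] + gamma * Zmat[:, [p]]

rng = np.random.default_rng(1)
X = make_collinear_X(n=30, p=4, gamma=0.85, rng=rng)
# np.corrcoef(X, rowvar=False) has off-diagonal entries near 0.85**2
</pre>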

(i) Throughout the table, the EMSE values of <math>\tilde\beta(k,d)</math> are less than those of <math>\tilde\beta</math>, and the EMSE values of <math>\tilde\beta_r(k,d)</math> are less than those of <math>\tilde\beta_r</math>. In general, the EMSE values of <math>\tilde\beta_r(k,d)</math> are the smallest among all the estimators.

(ii) As <math>\gamma</math> and the variance increase, the EMSE values of the estimators increase.

(iii) As <math>\gamma</math> increases, the difference between the EMSE values of the two-parameter estimators and those of the best linear unbiased estimators increases. This implies an increasing improvement from the two-parameter estimators.

Table 1. EMSE and RMSE values of the estimators for the two designs and the three levels of collinearity

γ = 0.75                     Design 1                Design 2
                             ni=3        ni=7        ni=3        ni=7
EMSE β̃                      0.0001526   7.3974e-05  0.0003053   0.0001479
EMSE β̃_r                    0.0001461   7.1668e-05  0.0002923   0.0001433
EMSE β̃(k,d)                 0.0001032   6.9166e-05  0.0001912   0.0001332
EMSE β̃_r(k,d)               9.9335e-05  6.7089e-05  0.0001845   0.0001293
RMSE(β̃, β̃(k,d))             1.4795      1.0695      1.5965      1.1104
RMSE(β̃_r, β̃_r(k,d))         1.4716      1.0682      1.5845      1.1083
RMSE(β̃(k,d), β̃_r(k,d))      1.0389      1.0309      1.0363      1.0301

γ = 0.85                     Design 1                Design 2
                             ni=3        ni=7        ni=3        ni=7
EMSE β̃                      0.0002198   0.0003460   0.0004396   0.0002387
EMSE β̃_r                    0.0002076   0.0001134   0.0004153   0.0002269
EMSE β̃(k,d)                 0.0001332   0.0001045   0.0002562   0.0002111
EMSE β̃_r(k,d)               0.0001277   0.0002460   0.0002459   0.0002014
RMSE(β̃, β̃(k,d))             1.6495      1.1414      1.7156      1.1308
RMSE(β̃_r, β̃_r(k,d))         1.6263      1.1373      1.6889      1.1266
RMSE(β̃(k,d), β̃_r(k,d))      1.0430      1.1219      1.0418      1.0481

γ = 0.95                     Design 1                Design 2
                             ni=3        ni=7        ni=3        ni=7
EMSE β̃                      0.0005729   0.0003460   0.0011459   0.0006920
EMSE β̃_r                    0.0005033   0.0003044   0.0010066   0.0006088
EMSE β̃(k,d)                 0.0002762   0.0002760   0.0005201   0.0005272
EMSE β̃_r(k,d)               0.0002578   0.0002460   0.0004927   0.0004716
RMSE(β̃, β̃(k,d))             2.0743      1.2533      2.2030      1.3125
RMSE(β̃_r, β̃_r(k,d))         1.9524      1.2369      2.0429      1.2908
RMSE(β̃(k,d), β̃_r(k,d))      1.0713      1.1219      1.0556      1.1178

7. Real data analysis

We consider a data set known as the Egyptian pottery data to show the behavior of the new restricted and unrestricted two-parameter estimators. This data set arises from an extensive archaeological survey of pottery production and distribution in the ancient Egyptian city of Al-Amarna. The data consist of measurements of chemical contents (mineral elements) made on many samples of pottery using two different techniques, NAA and ICP (see Smith et al. [30] for a description of the techniques). The set of pottery was collected from different locations around the city. We fit the data set by the linear mixed model <math>y = X\beta + Zu + \varepsilon</math>, where <math>y</math> is the vector of response variables and <math>X</math> and <math>Z</math> are the design matrices of the fixed and random effects, respectively. First, we estimated the variance components. Then, by calculating the eigenvalues of <math>X'H^{-1}X</math>, the condition number 8322860 is obtained, which indicates severe multicollinearity. We considered the stochastic linear restriction <math>r=R\beta+\phi</math> and applied the methods of the previous sections to the Egyptian pottery data. Using the iterative method introduced at the end of Section 5, <math>\hat k</math> and <math>\hat d</math> are obtained as 0.348 and 0.373, respectively. In Table 2, the estimated MSE values of the estimators are obtained by replacing the estimates in the corresponding theoretical MSE equations. We can see that the estimated MSE value of <math>\tilde\beta(k,d)</math> is less than that of <math>\tilde\beta</math>; also, the estimated MSE value of <math>\tilde\beta_r(k,d)</math> is less than that of <math>\tilde\beta_r</math>. In general, the estimated MSE value of <math>\tilde\beta_r(k,d)</math> is the smallest among all the estimators, so we conclude that the stochastic restricted two-parameter estimator performs better than the other estimators. Note that in the results obtained for this data set all relevant eigenvalues are positive, and the conditions of Theorems 4.1 and 4.2 hold. In Figure 1, the estimated MSE values of the estimators are plotted against <math>k</math> in the interval [0, 2] with <math>d</math> fixed. Because <math>\tilde\beta</math> does not depend on <math>k</math>, its estimated value is the same for all <math>k</math> values. It is evident that the estimated MSE of <math>\tilde\beta(k,d)</math> is always less than that of <math>\tilde\beta</math>. Altogether, the two-parameter estimators can perform better than the BLUE in the MSEM criterion under the stated conditions.
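As a small companion to the diagnostic reported above, the following sketch computes the condition number from the eigenvalues of <math>X'H^{-1}X</math>; the helper name is illustrative, and the choice of matrix is our assumption, based on the estimators used in this paper.

<pre>
# A sketch of the multicollinearity diagnostic: the condition number
# lambda_max / lambda_min of X' H^{-1} X (values in the millions, as reported
# for the pottery data, indicate severe multicollinearity).
import numpy as np

def condition_number(X, Hinv):
    eigvals = np.linalg.eigvalsh(X.T @ Hinv @ X)
    return eigvals.max() / eigvals.min()
</pre>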

Table 2. Estimated MSE values of the proposed estimators

Estimator   β̃          β̃(k,d)     β̃_r        β̃_r(k,d)
EMSE        1.216069   1.201379   0.1544953  0.1541745


[Figure 1. The estimated mean square error values of the estimators versus k with d fixed]

8. Conclusion

In this article, we proposed the two-parameter estimator and the stochastic restricted two-parameter estimator to overcome the effects of the multicollinearity problem in linear mixed models. We also obtained estimates of the variance parameters and then, using the mean squared error matrix sense, made comparisons between the proposed estimators and some other estimators. Finally, we proposed methods for estimating the biasing parameters and provided a simulation study and a real data example to illustrate the performance of the new estimators.

Disclosure statement

No potential conflict of interest was reported by the authors.

Funding

The research was supported by the Islamic Azad University.

References

[1] Zenzile T.G. Community health workers' understanding of their role in rendering maternal, child and women's health services. Diss. North-West University, 2018.

[2] Sach F., Dierenfeld E.S., Langley-Evans S.C., Hamilton E., Lark R.M., Yon L., Watts M.J. Potential bio-indicators for assessment of mineral status in elephants. Sci. Rep-UK., 10(1):1-14, 2020.

[3] Khanna R., Guler I., Nerkar A. Entangled decisions: Knowledge interdependencies and terminations of patented inventions in the pharmaceutical industry. Strateg. Manage. J., 39(9):2439-2465, 2018.

[4] Lippert A.M., Corsi D.J., Venechuk G.E. Schools influence adolescent e-cigarette use, but when? Examining the interdependent association between school context and teen vaping over time. J. Youth. Adolescence, 48(10):1899-1911, 2019.

[5] Chen Z., Zhu S., Niu Q., Zuo T. Knowledge discovery and recommendation with linear mixed model. IEEE Access, 8:38304-38317, 2020.

[6] Henderson C.R. Estimation of genetic parameters. Ann. Math. Stat., 21:309–310, 1950.

[7] Henderson C.R., Searle S.R., VonKrosig C.N. Estimation of environmental and genetic trends from records subject to culling. Biometrics, 15:192–218, 1959.

[8] Farebrother R.W. Further results on the mean square error of ridge regression. J. Royal. Stat. Soc., Series B (Methodological), 38(3):248-250, 1976.

[9] Liu K. A new class of biased estimate in linear regression. Commun. Statist. Theor. Meth., 22(2):393–402, 1993.

[10] Liu X.Q., Hu P. General ridge predictors in a mixed linear model. J. Theor. Appl. Stat., 47:363–378, 2013.

[11] Yang H., Chang X. A new two-parameter estimator in linear regression. Commun. Statist. Theor. Meth., 39:923–934, 2010.

[12] Theil H., Goldberger A.S. On pure and mixed statistical estimation in economics. Int. Econ. Rev., 2:65–78, 1961.

[13] Theil H. On the use of incomplete prior information in regression analysis. J. Amer. Statist. Assoc., 58,401–414, 1963.

[14] Gilmour A.R., Cullis B.R., Welham S.J., Gogel B.J., Thompson R. An efficient computing strategy for predicting in mixed linear models. Comput. Statist. Data Anal., 44:571–586, 2004.

[15] Jiang J., Lahiri P. Mixed model prediction and small area estimation. Test, 15(1):1-96, 2006.

[16] Patel S.R., Patel N.P. Mixed effect exponential linear model. Commun. Statist. Theor. Meth., 21(9):2721-2740, 1992.

[17] Eliot M.N., Ferguson J., Reilly M.P., Foulkes A.S. Ridge regression for longitudinal biomarker data. Int. J. Biostat., 7:1–11, 2011.

[18] Özkale M.R., Can F. An evaluation of ridge estimator in linear mixed models: an example from kidney failure data. J. Appl. Statist., 44(12):2251–2269, 2017.

[19] Kuran Ö., Özkale M.R. Gilmour's approach to mixed and stochastic restricted ridge predictions in linear mixed models. Linear Algebra Appl., 508:22–47, 2016.

[20] Hartley H.O., Rao J.N. Maximum-likelihood estimation for the mixed analysis of variance model. Biometrika, 54:93–108, 1967.

[21] Lee Y., Nelder J.A. Generalized linear models for the analysis of quality-improvement experiments. Canad. J. Stat., 26(1):95–105, 1998.

[22] Hoerl A.E., Kennard R.W. Ridge regression: biased estimation for non-orthogonal problems. Technometrics, 12:55–67, 1970.

[23] Hoerl A.E., Kennard R.W. Ridge regression: iterative estimation of the biasing parameter. Commun. Statist. Theor. Meth., 5:77–88, 1976.

[24] Wencheko E. Estimation of the signal-to-noise in the linear regression model. Statist. Pap., 41:327–343, 2000.

[25] Kibria B.M. Performance of some new ridge regression estimators. Commun. Statist. Simul. Computat., 32:419–435, 2003.

[26] Mallows C.L. Some comments on Cp. Technometrics, 15:661–675, 1973.

[27] McDonald G.C., Galarneau D.I. A monte carlo evaluation of some ridge-type estimators. J. Amer. Statist. Assoc., 70:407–416, 1975.

[28] Golub G.H., Heath M., Wahba G. Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics, 21:215–223, 1979.

[29] Craven P., Wahba G. Smoothing noisy data with spline functions. Numer. Math., 31:377–403, 1978.

[30] Smith D.M., Hart F.A., Symond R.D., Walsh J.N. Analysis of Roman pottery from Colchester by inductively coupled plasma spectrometry. Sci. Archaeolog. Glasgow, 196:41-55, 1987.


Document information

Published on 04/06/21
Accepted on 04/06/21
Submitted on 20/03/21

Volume 37, Issue 2, 2021
DOI: 10.23967/j.rimni.2021.06.001
Licence: CC BY-NC-SA license
