ECMT2130 – 2021 semester 1 final exam solutions
1. (10 points) CAPM model risk-free rate of return
Jeff believes that he can achieve excess returns on his portfolio by concentrating his investment in the US software industry. Before committing to that investment strategy, though, he wants explore evidence supporting his investment approach so he estimates the following model of asset returns:
r_st − r_ft = α_s + β_s (r_mt − r_ft) + e_st    (1)
where, in time period t:
• r_st is the monthly rate of return on the total return index for the software industry;
• r_mt is the monthly rate of return on the total return index for the market;
• r_ft is the monthly risk-free rate of return; and
• e_st is the shock to the monthly rate of return on the total return index for the software industry.
To estimate the regression, Jeff uses the United States 1-month Treasury Bill (T-bill) simple interest rate to approximate the risk-free rate of return. Over the sample period, the graph of the 1 month treasury bill rate is as shown below:
(a) (2 points) This regression equation is based upon the version of the Capital Asset Pricing Model that assumes investors can borrow or lend as much as they wish at the risk-free rate of return. In light of this assumption, critique Jeff’s use of the 1-month treasury-bill rate as the risk-free rate of return. Include an assessment of default risk and inflation uncertainty in your answer.
(b) (2 points) How can you reconcile the non-zero standard deviation of the one-month T-Bill rate with the CAPM assumption that the risk-free rate of return is a guaranteed rate of return, with no associated uncertainty?
(c) (5 points) Jeff estimates the model and obtains the following regression results.

Table 1: Results

                                Dependent variable:
                              industry excess returns
                                      (CAPM)
  market excess returns         0.997 (0.045)
  Constant                      0.350 (0.198)
  Observations                  138
  R2
  Residual Std. Error           2.228 (df = 136)
  F Statistic                   481.773∗∗∗ (df = 1; 136)
  Note: ∗p<0.1; ∗∗p<0.05; ∗∗∗p<0.01
At the 5% level of significance, conduct a formal hypothesis test to explore the availability of evidence that the industry generates consistent returns above those that should be expected given the time-value of money and the industry’s exposure to systematic risk. Include a diagram showing the test statistic distribution under the null hypothesis, along with the rejection region.
(d) (1 point) Given that the standard deviation of the excess returns on the market is 4.191%, quantitatively compare the importance of systematic and non-systematic risk in driving the industry’s excess return variability.
(a) Jeff’s use of the 1-month T-bill rate as the risk-free rate of return is reasonable if investors have a 1-month investment horizon. The nominal return on the 1-month T-bill is known to the investor over that horizon, so the only source of uncertainty is the risk of default by the government that issued the T-bills. That risk is low enough over a 1-month horizon that it can be ignored. Investors should, however, be concerned about the real rather than the nominal rate of return, and uncertainty about inflation over the investment horizon creates uncertainty about the real return on the 1-month T-bill. With low and stable inflation, this too is a small enough level of uncertainty that it is reasonable to ignore.
(b) Over the time period shown in the graph, the 1-month T-bill rate varies. That variation reflects changes in expected inflation, the time-value of money, and monetary policy. However, at any one point in time, the rate of return on the 1-month T-bill over the next month is certain: it will likely differ from the rate in other months, but to the investor it is known in advance. In short, the historical variability of the T-bill rate is not a measure of the uncertainty an investor faces about the coming month’s T-bill return.
(c) Testing for alpha returns for the software industry.
1. We are testing the null hypothesis H0: α_s ≤ 0 against the alternative hypothesis H1: α_s > 0 at the 5% level of significance.
2. The test statistic is the t-ratio:
t* = (α̂_s − 0) / SE(α̂_s)
3. Under the null hypothesis, this test statistic has a Student’s t-distribution, even with small samples, if the errors in the regression are normally distributed. If the distribution of the errors deviates too much from normal (perhaps because of fat-tails) then we can still work with the Student’s t distribution under the assumption that the sample size of 138 months is large enough to rely on the central limit theorem.
4. It is an upper-tail test so the decision rule is to reject the null hypothesis if the test statistic is above the critical value t_{0.05,136} = 1.656.
5. The test statistic is 1.769. This is greater than the critical value so we have sufficient evidence, at the 5% level of significance, to reject the null hypothesis. We can conclude that the software industry’s monthly rate of return is above the monthly rate of return that can be explained by the time-value of money and the industry’s systematic risk exposure.
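As a quick numerical check of the test above, a minimal Python sketch (assuming scipy is available; the estimate and standard error are taken from Table 1):

from scipy import stats

alpha_hat, se, df = 0.350, 0.198, 136  # intercept estimate, standard error, residual df (Table 1)
t_stat = alpha_hat / se                # t-ratio under H0: alpha_s = 0
crit = stats.t.ppf(0.95, df)           # upper-tail 5% critical value
p_value = stats.t.sf(t_stat, df)       # one-sided p-value
print(f"t = {t_stat:.3f}, critical value = {crit:.3f}, p-value = {p_value:.3f}")
# t = 1.768 > 1.656, so the null hypothesis is rejected at the 5% level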
(d) If Jeff held a portfolio that replicated the returns of the software industry then the variation in his returns could be attributed to:
1. variation in excess market returns, scaled by the software industry’s CAPM beta which is extremely close to 1; and
2. idiosyncratic variation.
Assuming a CAPM beta of 1, the standard deviation of monthly excess market returns, 4.191%, is almost double the standard deviation of idiosyncratic shocks to the software industry’s monthly excess returns, as measured by the residual standard error, 2.228%. Thus, the variation in portfolio returns for Jeff would be dominated by systematic risk, but idiosyncratic risk would not be negligible.
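In variance terms (a back-of-the-envelope decomposition using the estimates above):

Var(r_st − r_ft) ≈ β̂_s² × 4.191² + 2.228² = 0.997² × 17.56 + 4.96 ≈ 17.46 + 4.96 = 22.42

so systematic risk accounts for roughly 17.46/22.42 ≈ 78% of the industry’s excess-return variance, with idiosyncratic risk contributing the remaining 22%.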
2. (10 points) ARMA model
x_t is a stochastic process described by the equation:
x_t = (1 + 0.1L − 0.6L²) e_t    (2)
where e_t are independently and identically distributed shocks with mean 0 and variance 1.
(a) (1 point) What order moving average is this model?
(b) (2 points) Show that the moving average model is invertible.
(c) (2 points) Derive the unconditional variance of x_t.
(d) (2 points) Derive the correlation of x_t and x_{t−1}.
(e) (1 point) Derive the correlation of x_t and x_{t−2}.
(f) (1 point) What is the correlation of x_t and x_{t−k} for all k ≥ 3?
(g) (1 point) Given these autocorrelations, what identifying feature would you expect to observe in the autocorrelation function for a sufficiently long realisation of the stochastic process?
(a) The stochastic process for x_t is an MA(2) model, a moving average of order 2.
(b) The MA(2) is invertible if the roots of the polynomial in the lag operator lie strictly outside the unit circle in the complex plane. Setting the polynomial in the lag operator to zero gives 1 + 0.1L − 0.6L² = 0, and the roots of this quadratic in L can be found using the quadratic formula:
L = (−0.1 ± √(0.1² − 4 × (−0.6) × 1)) / (2 × (−0.6)) = (−0.1 ± √2.41) / (−1.2)
This gives the two real roots (to 2 decimal places):
L1 = −1.21 and L2 = 1.38
Both of these roots are real and lie more than 1 unit from the origin, because their absolute values, 1.21 and 1.38, are strictly greater than 1. Thus, the MA(2) is invertible.
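A quick numerical check of these roots (a minimal numpy sketch; np.roots takes the coefficients from the highest power of L down to the constant):

import numpy as np

roots = np.roots([-0.6, 0.1, 1.0])  # roots of -0.6L^2 + 0.1L + 1 = 0
print(roots)                        # approximately 1.38 and -1.21
print(np.all(np.abs(roots) > 1))    # True -> the MA(2) is invertible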
(c) Given that the unconditional mean of x_t is 0, the unconditional variance of x_t obtains from:
Var(x_t) = E[(e_t + 0.1 e_{t−1} − 0.6 e_{t−2})²]
Given the shocks are mean zero and independent of each other, this can be written as:
Var(x_t) = Var(e_t) + 0.1² Var(e_{t−1}) + 0.6² Var(e_{t−2})
Simplifying, given each shock has unit variance, we obtain:
Var(x_t) = 1 + 0.01 + 0.36 = 1.37
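As a one-line numerical check (the variance of an MA process with unit-variance shocks is the sum of its squared moving-average coefficients):

import numpy as np
print(np.sum(np.array([1.0, 0.1, -0.6]) ** 2))  # 1.37, the unconditional variance of x_t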
(d) Given that the unconditional mean of x_t is 0, the covariance of x_t and x_{t−1} can be found from:
Cov(x_t, x_{t−1}) = E[(e_t + 0.1 e_{t−1} − 0.6 e_{t−2})(e_{t−1} + 0.1 e_{t−2} − 0.6 e_{t−3})]
Using the fact that the shocks are mean zero and independent of each other, this can be expanded and with the expected values of the product of shocks in different time periods set to zero, written as:
Cov(x_t, x_{t−1}) = 0.1 Var(e_{t−1}) − 0.06 Var(e_{t−2}) = 0.1 − 0.06 = 0.04
Dividing by the unconditional variance, we obtain the correlation as 0.04/1.37 = 0.029 to 3 decimal places.
(e) Given that the unconditional mean of x_t is 0, the covariance of x_t and x_{t−2} can be found from:
Cov(x_t, x_{t−2}) = E[(e_t + 0.1 e_{t−1} − 0.6 e_{t−2})(e_{t−2} + 0.1 e_{t−3} − 0.6 e_{t−4})]
Using the fact that the shocks are mean zero and independent of each other, this can be expanded and with the expected values of the product of shocks in different time periods set to zero, written as:
Cov(x_t, x_{t−2}) = −0.6 Var(e_{t−2}) = −0.6
Dividing by the unconditional variance, we obtain the correlation as −0.6/1.37 = −0.438 to 3 decimal places.
(f) Using the same approach, it can be shown that the covariances, and so the correlations, between x_t and x_{t−k} for k ≥ 3 are all equal to zero.
(g) The ACF would cut off abruptly after lag 2, with all autocorrelation estimates at longer lags expected to be insignificantly different from zero. This abrupt cutoff in the ACF at lag 2 is the identifying feature of an MA(2) process.
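These theoretical autocorrelations can be verified with statsmodels (a minimal sketch; the ar and ma arguments are the lag-polynomial coefficients):

from statsmodels.tsa.arima_process import ArmaProcess

# MA(2): x_t = (1 + 0.1L - 0.6L^2) e_t
process = ArmaProcess(ar=[1], ma=[1, 0.1, -0.6])
print(process.acf(lags=5))  # approx [1, 0.029, -0.438, 0, 0] - cutoff after lag 2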
3. (10 points) Greg’s GJR GARCH model
Greg is of the view that big negative shocks are more important than big positive shocks in causing periods of high volatility. To test his world view, Greg estimates the following variant of a GARCH(1,1) model for financial returns, with mean equation:
r_t = μ + u_t
and variance equation:
σ_t² = β + α u_{t−1}² + γ I_{t−1} u_{t−1}² + δ σ_{t−1}²
where:
• u_t = σ_t e_t;
• the shocks, e_t, are i.i.d. with e_t ∼ N(0, 1); and
• I_t is an indicator variable that takes the value 1 when the shock, u_t, is negative and the value 0 otherwise.
(a) (3 points) Suggest non-negativity constraints for the coefficients in the variance equation.
(b) (2 points) What stylised fact about financial returns is Greg able to capture by making this change to the standard GARCH(1,1) model?
(c) (5 points) Using 20 years of monthly data, Greg estimates his variant of the GARCH(1,1) model, obtaining a maximised log-likelihood value of −1056.038 and he estimates a standard GARCH(1,1) model, obtaining a maximised log-likelihood value of −1065.247. Perform an appropriate hypothesis test, at the 5% level of significance, to assess the strength of the evidence that there is a difference in the conditional volatility impact of negative shocks compared to positive shocks.
(a) To ensure that the conditional variance remains non-negative, we require β ≥ 0 (the constant term), α ≥ 0, and δ ≥ 0, and we also require that α ≥ −γ (equivalently, α + γ ≥ 0). This constraint on γ makes it possible for the estimated model to find that negative shocks have more effect, less effect, or the same effect on conditional volatility as positive shocks. The constraint is expressed relative to α because we need all shocks, positive and negative, to have a net non-negative impact on conditional volatility.
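A small simulation sketch (in Python, with purely hypothetical parameter values that satisfy these constraints) illustrates that the recursion then keeps the conditional variance strictly positive:

import numpy as np

# Hypothetical parameter values chosen to satisfy the non-negativity constraints
beta_c, alpha, gamma, delta = 0.05, 0.04, 0.10, 0.85

rng = np.random.default_rng(0)
n = 240
u = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = beta_c / (1 - alpha - gamma / 2 - delta)  # approximate unconditional variance
for t in range(1, n):
    indicator = 1.0 if u[t - 1] < 0 else 0.0
    sigma2[t] = (beta_c + alpha * u[t - 1] ** 2
                 + gamma * indicator * u[t - 1] ** 2 + delta * sigma2[t - 1])
    u[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

print(sigma2.min() > 0)  # True: the constraints keep the conditional variance positive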
(b) The model is designed to capture leverage effects: negative shocks have an impact on the conditional variance that is greater than the impact of an equally sized positive shock if γ > 0. This matches the stylised fact that volatility tends to rise more after large falls in returns than after large rises.
(c) Likelihood ratio test for asymmetric effects of negative shocks on volatility.
1. The null hypothesis is γ = 0 and the alternative hypothesis is γ ̸= 0.
2. The likelihood ratio test statistic is LR* = −2(LL_R − LL_U) = −2 × ((−1065.247) − (−1056.038)) = 18.418, where LL_R is the maximised log-likelihood of the restricted (standard GARCH) model and LL_U that of the unrestricted (GJR) model.
3. Given that the model has been estimated with 20 years of monthly data (240 observations), there are enough observations to rely on the asymptotic distribution of the likelihood ratio statistic: under the null hypothesis it has a Chi-squared distribution with 1 degree of freedom, one for each restriction.
4. The test is an upper-tail test. Reject the null hypothesis if the test statistic lies above the critical value χ²_{0.05,1} = 3.841. Otherwise, fail to reject the null hypothesis.
5. The test statistic does lie in the rejection region, so we have sufficient evidence, at the 5% level of significance, to reject the null hypothesis. There does appear to be evidence that negative shocks have a different (stronger) effect on conditional volatility than positive shocks of the same size.
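A minimal sketch of the computation (assuming scipy; the log-likelihoods are those reported in the question):

from scipy import stats

ll_restricted = -1065.247    # standard GARCH(1,1)
ll_unrestricted = -1056.038  # GJR variant

lr = -2 * (ll_restricted - ll_unrestricted)  # 18.418
crit = stats.chi2.ppf(0.95, df=1)            # 3.841
p_value = stats.chi2.sf(lr, df=1)
print(f"LR = {lr:.3f}, critical value = {crit:.3f}, p-value = {p_value:.5f}")
# LR = 18.418 > 3.841 -> reject H0: gamma = 0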
4. (10 points) Market efficiency
A new fund manager in the USA builds his business around ethical investment principles. He commits
to not investing in any organisations that are involved in:
• Fossil fuel extraction
• Weapons manufacture
• Tobacco production
• Gambling
After comprehensively reviewing the companies operating in the US, he finds that these screens reduce the number of possible companies to invest in by 75%, measured by market capitalisation.
Using monthly simple rates of return, he then constructs an “ethical efficient frontier” from the risky assets in the remaining 25% of the companies that he deems to operate ethically.
He then invests all of his clients’ funds in the tangency portfolio along that efficient frontier, obtained by maximising the Sharpe ratio using the 1-month US Treasury bill interest rate as the risk-free interest rate.
(a) (3 points) Explain why the fund manager does not need to tailor individual portfolios for each of his expected-utility-maximising clients to take into account their different levels of risk aversion.
(b) (2 points) Over the long term, would you expect his clients to be better off financially compared to simply investing in a fund that held the market portfolio without excluding companies on the basis of ethical considerations?
(c) (3 points) As part of his marketing campaign, he retrospectively determines the monthly rate of return that would have been earned by his fund had it existed over the previous 19 months. He uses that data to estimate a Jensen-style time-series regression that explains variation in his fund’s excess returns over the risk-free rate of return in terms of an intercept and the excess returns on the market, defined as all companies, “ethical” and otherwise. He finds that the estimate of the intercept is positive and significant, even at the 15% level, and he describes that as the reward for good behaviour. As a well-informed client, suggest reasons why the analysis behind this marketing campaign is flawed.
(d) (2 points) Again as a well-informed client, you ask the “ethical” fund manager to check the normality of the errors in the regression described in part (c). Testing the residuals from the regression, he obliges by reporting a Jarque-Bera test statistic of 14.82. What are the implications of that test result for using t-tests to conduct inference about the intercept of the regression?
(a) By creating a fund that corresponds to the tangency portfolio on the ethical efficient frontier, he enables clients to select their own portfolio along the associated capital allocation line to match their level of risk aversion by splitting their portfolio between his fund and the risk-free asset. In that way, all clients can optimise their expected utility while investing the at-risk part of their wealth in his fund.
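For example, under mean-variance utility of the form U = E[r] − ½ A σ², the optimal weight on the tangency fund is w* = (E[r_p] − r_f) / (A σ_p²). A minimal Python sketch with purely hypothetical numbers:

# Hypothetical tangency portfolio: 8% expected return, 20% volatility; risk-free rate 2%
mu_p, sigma_p, rf = 0.08, 0.20, 0.02
A = 4.0  # client's coefficient of risk aversion
w_star = (mu_p - rf) / (A * sigma_p ** 2)  # optimal share of wealth in the tangency fund
print(w_star)  # 0.375 -> 37.5% in the fund, 62.5% in the risk-free asset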
(b) The ethical efficient frontier cannot be more efficient, and is highly likely to be less efficient, than the efficient frontier constructed by allowing investment in all companies. Thus, his clients, over the long term, will tend to be less well off than if they invested in a fund with the same investment strategy but without restrictions on the set of risky assets. Their portfolios would have lower average returns or higher return variability, or both.
(c) There are many ways to critique the Jensen-style CAPM marketing campaign based on the “reward for good behaviour”. Any two of the following three issues would earn full marks:
1. The chosen level of significance is unusually high. The evidence would be more convincing if the intercept were significant at the 5% or even the 1% level.
2. The regression model uses returns on a made-up portfolio, not actual historical returns on a portfolio formed using investment principles decided before the relevant returns information was available. Using available information about returns for various industries, it would be possible to design the “ethical criteria” so that the exclusions driven by ethical considerations led to a finding of a significant and positive alpha. This criticism could only be addressed by assessing actual returns on the portfolio going forward.
3. The chosen history of data is short and seems to be very specifically designed. It would be important to check how robust the finding of a significant positive reward for good behaviour is to changes in that time horizon.
(d) The Jarque-Bera test is a test for non-normality. It tends to perform better with large samples, and the given sample of 19 months is small. However, assuming there is enough data to work with the Chi-squared distribution of the test statistic under the null hypothesis of error normality, the test statistic of 14.82 lies in the rejection region, above the critical value implied by any reasonable choice of significance level (e.g. with 2 degrees of freedom, the 10% critical value is 4.61 and even the 1% critical value is 9.21). Thus, the test suggests that the errors are not normal. With a small sample, the use of the Student’s t distribution for conducting hypothesis tests about the intercept is therefore in doubt.
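A minimal check (assuming scipy) of where 14.82 falls in the Chi-squared distribution with 2 degrees of freedom:

from scipy import stats

jb = 14.82
print(stats.chi2.ppf([0.90, 0.95, 0.99], df=2))  # [4.605, 5.991, 9.210]
print(stats.chi2.sf(jb, df=2))                   # p-value approx 0.0006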