The variance of $Y$ is not finite. This is because an alpha-stable variable $X$ with $\alpha = 3/2$ (a Holtzmark distribution) does have a finite expectation $\mu$, but its variance is infinite. If $Y$ had a finite variance $\sigma^2$, then by exploiting the independence of the $X_i$ and the definition of variance we could compute
$$\sigma^2 = \operatorname{Var}(Y) = E(Y^2) - E(Y)^2 = E(X_1^2 X_2^2 X_3^2) - E(X_1 X_2 X_3)^2 = E(X^2)^3 - \left(E(X)^3\right)^2 = \left(\operatorname{Var}(X) + E(X)^2\right)^3 - \mu^6 = \left(\operatorname{Var}(X) + \mu^2\right)^3 - \mu^6.$$
This is a cubic equation in $\operatorname{Var}(X)$, and a cubic with real coefficients has at least one real solution (and up to three, but no more), implying $\operatorname{Var}(X)$ would be finite. But it is not: this contradiction proves the claim.
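To see the cubic-root step concretely, here is a small sketch in Python. The values $\mu = 1$ and $\sigma^2 = 5$ are hypothetical placeholders (they are not the Holtzmark moments); the point is only that solving $(v + \mu^2)^3 - \mu^6 = \sigma^2$ for $v = \operatorname{Var}(X)$ always yields a real root:

```python
import numpy as np

# Hypothetical illustrative values -- NOT the Holtzmark moments.
mu = 1.0      # assumed finite mean E(X)
sigma2 = 5.0  # a hypothetical finite value for Var(Y)

# (v + mu^2)^3 - mu^6 = sigma2, expanded in v:
# v^3 + 3 mu^2 v^2 + 3 mu^4 v - sigma2 = 0
coeffs = [1.0, 3 * mu**2, 3 * mu**4, -sigma2]
roots = np.roots(coeffs)
real_roots = roots[np.abs(roots.imag) < 1e-9].real

# A cubic with real coefficients always has at least one real root.
print(real_roots)
```

Any such real root would be a finite value of $\operatorname{Var}(X)$, which is exactly what the contradiction rules out.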
Let's turn to the second question.
Any sample quantile converges to the true quantile as the sample grows large. The next few paragraphs prove this general point.
Let the associated probability be $q = 0.01$ (or any other value strictly between $0$ and $1$). Write $F$ for the distribution function, so that $Z_q = F^{-1}(q)$ is the $q$th quantile.
All we need to assume is that $F^{-1}$ (the quantile function) is continuous. This assures us that for any $\epsilon > 0$ there are probabilities $q_- < q$ and $q_+ > q$ for which
$$F(Z_q - \epsilon) = q_-, \quad F(Z_q + \epsilon) = q_+,$$
and that as $\epsilon \to 0$, the limit of the interval $[q_-, q_+]$ is $\{q\}$.
Consider any iid sample of size $n$. The number of elements of this sample that are less than $Z_q - \epsilon$ has a Binomial$(q_-, n)$ distribution, because each element independently has a chance $q_-$ of being less than $Z_q - \epsilon$. The Central Limit Theorem (the usual one!) implies that for sufficiently large $n$, the number of elements less than $Z_q - \epsilon$ is given by a Normal distribution with mean $nq_-$ and variance $nq_-(1-q_-)$ (to an arbitrarily good approximation). Let the CDF of the standard Normal distribution be $\Phi$. The chance that this quantity exceeds $nq$ therefore is arbitrarily close to
$$1 - \Phi\left(\frac{nq - nq_-}{\sqrt{nq_-(1-q_-)}}\right) = 1 - \Phi\left(\sqrt{n}\,\frac{q - q_-}{\sqrt{q_-(1-q_-)}}\right).$$
Because the argument of $\Phi$ on the right hand side is a fixed positive multiple of $\sqrt{n}$ (positive since $q > q_-$), it grows arbitrarily large as $n$ grows. Since $\Phi$ is a CDF, its value approaches arbitrarily close to $1$, showing the limiting value of this probability is zero.
In words: in the limit, it is almost surely the case that fewer than $nq$ of the sample elements are less than $Z_q - \epsilon$. An analogous argument proves it is almost surely the case that at least $nq$ of the sample elements are not greater than $Z_q + \epsilon$. Together, these imply the $q$ quantile of a sufficiently large sample is extremely likely to lie between $Z_q - \epsilon$ and $Z_q + \epsilon$.
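The vanishing tail probability above is easy to check numerically. This Python sketch evaluates $1 - \Phi\bigl(\sqrt{n}\,(q - q_-)/\sqrt{q_-(1-q_-)}\bigr)$ for growing $n$, with $q_- = 0.008$ chosen as a purely hypothetical value below $q = 0.01$:

```python
from scipy.stats import norm

q, q_minus = 0.01, 0.008   # q_minus < q; 0.008 is a hypothetical choice

def tail_prob(n):
    # Normal approximation to the chance that more than nq sample elements
    # fall below Z_q - epsilon: 1 - Phi(sqrt(n) (q - q_-) / sqrt(q_-(1 - q_-)))
    z = n**0.5 * (q - q_minus) / (q_minus * (1 - q_minus))**0.5
    return norm.sf(z)   # survival function, i.e. 1 - Phi(z)

for n in (100, 1000, 10000, 100000):
    print(n, tail_prob(n))
```

The probabilities shrink monotonically toward zero as $n$ grows, exactly as the argument requires.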
That's all we need in order to know that simulation will work. You may choose any desired degree of accuracy $\epsilon$ and confidence level $1 - \alpha$ and know that for a sufficiently large sample size $n$, the order statistic closest to $nq$ in that sample will have a chance of at least $1 - \alpha$ of being within $\epsilon$ of the true quantile $Z_q$.
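Here is a quick empirical illustration of that convergence, in Python. It uses the standard Normal distribution (not $Y$) purely because its true $q = 0.01$ quantile is known exactly, so the error of the sample quantile can be measured directly:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
q = 0.01
Z_q = norm.ppf(q)   # true 1% quantile of the standard Normal, about -2.326

# The sample quantile approaches Z_q as the sample size grows.
for n in (10**3, 10**4, 10**5):
    est = np.quantile(rng.standard_normal(n), q)
    print(n, est, abs(est - Z_q))
```

The absolute error shrinks on the order of $1/\sqrt{n}$, which is the Binomial/CLT behavior derived above.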
Having established that a simulation will work, the rest is easy. Confidence limits can be obtained from limits for the Binomial distribution and then back-transformed. Further explanation (for the q=0.50 quantile, but generalizing to all quantiles) can be found in the answers at Central limit theorem for sample medians.
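As a sketch of that back-transformation, the following Python fragment finds order-statistic ranks from equal-tailed Binomial limits and reads off the corresponding sample values. The standard Normal sample and the specific equal-tailed choice of limits are illustrative assumptions, not the only option:

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(1)
n, q, alpha = 1000, 0.01, 0.05

x = np.sort(rng.standard_normal(n))

# The number of observations below the true quantile Z_q is Binomial(n, q),
# so Binomial quantiles give ranks whose order statistics bracket Z_q with
# probability roughly 1 - alpha.
lo = int(binom.ppf(alpha / 2, n, q))       # lower rank (1-based)
hi = int(binom.ppf(1 - alpha / 2, n, q))   # upper rank (1-based)
ci = (x[max(lo - 1, 0)], x[min(hi, n - 1)])

print(lo, hi, ci)
```

The resulting interval is distribution-free: it relies only on the Binomial count of observations below the quantile, not on the shape of the sampled distribution.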
The $q = 0.01$ quantile of $Y$ is negative, and its sampling distribution is highly skewed. To reduce the skew, this figure shows a histogram of the logarithms of the negatives of 1,000 simulated estimates of the quantile, each computed from a sample of $n = 300$ values of $Y$.
library(stabledist)

n <- 3e2      # sample size per simulation
q <- 0.01     # quantile to estimate
n.sim <- 1e3  # number of simulated samples
Y.q <- replicate(n.sim, {
  # Y = X1*X2*X3 - 1 with X ~ Stable(alpha = 3/2, beta = 0, gamma = 1, delta = 1)
  Y <- apply(matrix(rstable(3*n, 3/2, 0, 1, 1), nrow=3), 2, prod) - 1
  log(-quantile(Y, q))  # log of the negated sample quantile (Y_q < 0)
})
m <- median(-exp(Y.q))  # median of the simulated quantile estimates
hist(Y.q, freq=FALSE,
     main=paste("Histogram of the", q, "quantile of Y for", n.sim, "iterations"),
     xlab="Log(-Y_q)",
     sub=paste("Median is", signif(m, 4),
               "Negative log is", signif(log(-m), 4)),
     cex.sub=0.8)
abline(v=log(-m), col="Red", lwd=2)  # mark the median on the log scale