All topics - Topic: likelihood
w******s
Posts: 16209
1
The United States Patent and Trademark Office has issued a refusal on Google's Nexus One application.
According to nexus-one.co.uk, Google filed an application for the trademark back on 10 December 2009 for use in connection with mobile phones.
The Trademark Office recently issued a notice of refusal:
SECTION 2(d) REFUSAL: LIKELIHOOD OF CONFUSION
Registration of the applied-for mark is refused because of a likelihood of confusion with the mark in U.S. Registration No. 3554195.
"The refusal ha
c*******i
Posts: 951
2
From topic: Apple board - Why Gizmodo break the law?
1) In CA, "knowingly receiving stolen property" is a crime.
2) The keyword is "knowingly" -- you only need to know there is a strong likelihood that it was stolen.
3) Gizmodo paid $5,000 for the iPhone 4G, knowing there was a strong likelihood that it was stolen.
4) So the case becomes criminal, and the police (not Apple) decided to issue a warrant (the police want to find the seller and question him).
5) A committed crime cannot be withdrawn, so it is irrelevant that Gizmodo returned the iPhone 4G to Apple.
a***y
Posts: 19743
3
From topic: Apple board - [Digest] Why Gizmodo break the law?
☆─────────────────────────────────────☆
cuicuicui (cuicuicui) wrote on (Tue Apr 27 12:56:05 2010, US Eastern):
1) In CA, "knowingly receiving stolen property" is a crime.
2) The keyword is "knowingly" -- you only need to know there is a strong likelihood that it was stolen.
3) Gizmodo paid $5,000 for the iPhone 4G, knowing there was a strong likelihood that it was stolen.
4) So the case becomes criminal, and the police (not Apple) decided to issue a warrant (the police want to find the seller and question him).
5) A committ... [read the full post]
w********2
Posts: 16371
4
Shares of Apple (AAPL) are up $12.84, or 2.5%, at $522.30 this morning after an appearance yesterday afternoon by chief executive Tim Cook at the Goldman Sachs technology conference, where he was interviewed by Goldman hardware analyst Bill Shope.
The takeaway is that Cook's remarks on the prospect of a dividend (he didn't commit to anything but indicated a willingness to consider all options) and his discussion of the AppleTV product are tantalizing hints at future actions.
For hi... [read the full post]
K****n
Posts: 5970
5
From topic: CS board - A question on probit regression (forwarded)
【 The following is forwarded from the Computation board 】
Sender: KeeVan (Kevin), Board: Computation
Title: A question on probit regression
Posted at: BBS Weiming (Fri Aug 22 21:53:32 2008)
Is there a ready-made textbook that works out the derivative of the maximum likelihood for probit? I want to check my result against it, but surprisingly can't google it... I don't quite trust MATLAB's glm-type functions; their training oscillates quite a bit.
Also, if I put a Gaussian prior on the probit parameters and compute the Bayesian
P(data) = Integrate(P(data|parameter)*P(parameter), over parameter),
using the probit model for P(data|parameter) and a Gaussian for P(parameter), is the Bayesian likelihood then reasonably easy to optimize? Has anyone already worked this out? Again, can't google it...
Thanks!
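For checking, the standard result the poster wants is (notation added here, not from the post; x_i'b is the linear index, \Phi and \varphi the standard normal CDF and density):
l(b) = \sum_i [ y_i \ln\Phi(x_i'b) + (1-y_i)\ln(1-\Phi(x_i'b)) ]
dl/db = \sum_i x_i \varphi(x_i'b) ( y_i - \Phi(x_i'b) ) / [ \Phi(x_i'b)(1-\Phi(x_i'b)) ]
Any textbook derivation should reduce to this form.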
v*******e
Posts: 11604
6
From topic: Programming board - Seeking help with the R language
Programmers doing statistics now... Your questions are all statistics questions, not R questions.
1) glm is just an iterative routine that fits the parameters of one particular class of models, so of course it runs until convergence. I've never heard of forward/backward/stepwise living inside glm itself.
2) AIC and BIC are for choosing a model, not for fitting its parameters. Which variables the model should include and which it should drop (e.g., leave out variables with little effect) is what AIC/BIC are for. If you use them to decide which variables your generalized linear model should contain, you naturally alternate them with glm(): pick some variables and fit the model with glm() to get the parameters and the likelihood, then add or remove a variable, fit again with glm(), and use AIC to decide whether to keep the change (a sketch follows below).
3) Wikipedia has a short introduction.

w**********y
Posts: 1691
7
likelihood
A personal, subjective suggestion: simply divide your log likelihood by the number of data points; then you will have a sense of the goodness of fit (see the sketch below).
Mean error
Training error and true (predictive) error.
I didn't know how people utilized "cross validation" with 'holdout' data until I worked in an insurance company. Theoretically, what they did is not that perfect.
AIC, BIC
There is no big difference in theory for model evaluation between linear and non-linear regression. Just harder
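A one-liner for the per-observation log-likelihood suggestion, assuming a fitted R model object fit (e.g., from glm()):

# Average log-likelihood per data point; closer to 0 is a better fit.
as.numeric(logLik(fit)) / nobs(fit)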
K**********e
Posts: 188
8
From topic: Biology board - His Majesty the Emperor is a biology geek too
Gene
Volume 427, Issues 1-2, 31 December 2008, Pages 7-18

Evolution of Pacific Ocean and the Sea of Japan populations of the gobiid species, Pterogobius elapoides and Pterogobius zonoleucus, based on molecular and morphological analyses
Akihito (a), Akishinonomiya Fumihito (b, c), Yuji Ikeda (d), Masahiro Aizawa (d), Takashi Makino (e, 1), Yumi Umehara (e), Yoshiaki Kai (f), Yuriko Nishimoto (b, g), Masami Hasegawa (b, e, h), Tetsuji Nakabo (i) and Takashi Gojobori (b, e)
(a) The Imperial Residence, 1-1 Chiyoda, Chiyoda-ku, ... [read the full post]
s******y
Posts: 17729
9
From topic: Biology board - The Nobel committee is out of line (forwarded)
【 The following is forwarded from the Military board 】
Sender: rbs (jay), Board: Military
Title: The Nobel committee is out of line (forwarded)
Posted at: BBS Weiming (Wed Oct 4 10:58:51 2017, US Eastern)
Sender: rbs (jay), Board: Joke
Title: The Nobel committee is out of line
Posted at: BBS Weiming (Wed Oct 4 10:23:52 2017, US Eastern)
Damn, 87 references and they couldn't add even one CNS paper by Prof. Shi or Prof. Neng?
References
1. Ruska, E., Nobel Lectures, Physics 1981-1990, Tore Frängsmyr and Gösta Ekspong, Eds. (1993) World Scientific Publishing, Singapore
2. Marton, L. (1934) Electron microscopy of biological objects. Nature 133, 911-911
3. Althoff, T., Mi... [read the full post]
p*******t
Posts: 501
10
【 The following is forwarded from the Economics board 】
Sender: prescient (星辰大海), Board: Economics
Title: A question on maximum simulated estimation
Posted at: BBS Weiming (Fri Mar 19 20:31:21 2010, US Eastern)
Once the simulator is built and the likelihood value for each choice has been computed, the next step is to find the parameters that maximize this likelihood function. Whether you use quasi-Newton or Newton-Raphson, you need the function's first-order derivatives or Hessian matrix. In this setting, how do you obtain those values? Is it enough to simply apply the definition of the derivative and compute the slope along each dimension at that point?
What is the more standard practice?
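A minimal R sketch of the finite-difference idea the poster describes, assuming loglik is a function of the parameter vector (hypothetical name); in practice optim() computes such differences internally when no analytic gradient is supplied:

# Central finite-difference gradient of a log-likelihood at theta.
num_grad <- function(loglik, theta, h = 1e-5) {
  sapply(seq_along(theta), function(i) {
    e <- rep(0, length(theta))
    e[i] <- h
    (loglik(theta + e) - loglik(theta - e)) / (2 * h)
  })
}

# Example with a Normal log-likelihood on toy data:
dat <- rnorm(50, mean = 1, sd = 2)
ll  <- function(th) sum(dnorm(dat, th[1], exp(th[2]), log = TRUE))
num_grad(ll, c(0, 0))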
K****n
Posts: 5970
11
From topic: Computation board - A question on probit regression
Is there a ready-made textbook that works out the derivative of the maximum likelihood for probit? I want to check my result against it, but surprisingly can't google it... I don't quite trust MATLAB's glm-type functions; their training oscillates quite a bit.
Also, if I put a Gaussian prior on the probit parameters and compute the Bayesian
P(data) = Integrate(P(data|parameter)*P(parameter), over parameter),
using the probit model for P(data|parameter) and a Gaussian for P(parameter), is the Bayesian likelihood then reasonably easy to optimize? Has anyone already worked this out? Again, can't google it...
Thanks!
p*******t
Posts: 501
12
From topic: Economics board - A question on maximum simulated estimation
Once the simulator is built and the likelihood value for each choice has been computed, the next step is to find the parameters that maximize this likelihood function. Whether you use quasi-Newton or Newton-Raphson, you need the function's first-order derivatives or Hessian matrix. In this setting, how do you obtain those values? Is it enough to simply apply the definition of the derivative and compute the slope along each dimension at that point?
What is the more standard practice?
q****i
Posts: 6923
13
From topic: Macromolecules board - A hard molecular-weight-related calculation
There is an oligomer synthesized by polycondensation of A-A and B-B monomers. Monomer A-A contains a ring and can be written simply as A-C-A. A small amount of the C (<3%) ring-opens to form D, so the polymerization also involves D-A3; since D-A3 carries three A functional groups, it causes branching during polymerization.
The molecular weight can be measured experimentally by GPC and by NMR (end-group analysis of the A content), and IR can measure the content of D in C. Can these results be used to estimate the likelihood of interchain coupling?
Is it accurate to interpret "likelihood of interchain coupling" here as the degree of crosslinking?
o****o
Posts: 8077
14
From topic: Mathematics board - How to express a running product (prod) in MATLAB symbolic math
For example, to take the derivative of a likelihood (not log-likelihood) function
L(\Theta; x) = \prod_{t=1}^T f(x_t; \Theta_t), with T = 11:
do all 11 density functions really have to be written out one by one? Can the product be expressed as written above?
Thanks
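Whatever the symbolic toolbox offers, the derivative itself never requires expanding the 11 factors; the logarithmic-derivative identity (standard calculus, added here, not from the post) gives
dL/d\Theta = L(\Theta;x) \sum_{t=1}^T [ (\partial f(x_t;\Theta_t)/\partial\Theta) / f(x_t;\Theta_t) ]
so one symbolic density f and a sum over t suffice.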
O********9
Posts: 59
15
From topic: Mathematics board - These two days have been really worrying.....
I didn't follow what your model is. You say the mean is Gaussian-distributed and the variance is Gamma-distributed, and then you say the mean and variance jointly follow a Normal-Inv-X^2 distribution. Which is it?
Still, your problem sounds like parameter estimation. The usual approaches:
1. Let the data be y and the unknown parameter be a. Compute the marginal likelihood function f(y|a), then maximize it over a. Of course, the marginal likelihood function is not always tractable.
2. If the model is complex, with intermediate (latent) variables, try expectation maximization (EM).
3. You can also use Gibbs sampling. Especially since you are using conjugate-prior structure throughout, the Gibbs sampler should be easy to derive (see the sketch below).
4. You can also try a variational method for the posterior of the parameters.
Have a look at this book: Pattern Recognition and Machine Learning. A classic reference; most of it covers Bayesian inference and its applications in machine learning.
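A hedged R sketch of point 3 for a Normal model with unknown mean and variance, using semi-conjugate priors; the specific prior values and data are hypothetical illustrations:

# Priors (assumed): mu ~ N(m0, s02), sig2 ~ Inv-Gamma(a0, b0).
set.seed(1)
y <- rnorm(100, 2, 1.5)                  # toy data
n <- length(y); m0 <- 0; s02 <- 100; a0 <- 2; b0 <- 2
mu <- 0; sig2 <- 1
draws <- matrix(NA, 2000, 2, dimnames = list(NULL, c("mu", "sig2")))
for (t in 1:2000) {
  v  <- 1 / (1/s02 + n/sig2)             # mu | sig2, y
  mu <- rnorm(1, v * (m0/s02 + sum(y)/sig2), sqrt(v))
  sig2 <- 1 / rgamma(1, a0 + n/2, b0 + sum((y - mu)^2)/2)  # sig2 | mu, y
  draws[t, ] <- c(mu, sig2)
}
colMeans(draws[-(1:500), ])              # posterior means after burn-in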
e*******e
Posts: 1144
16
From topic: Quant board - What is bayesian optimization?
Bayesian Optimization is actually a rather odd thing, because a true Bayesian should not be doing optimization at all.
Specifically (my personal view):
==========================================================
Whether you use a prior is not what separates Bayesian from Frequentist.
Their real difference is whether they believe an (unknown) true parameter exists.
The Bayesian combines the likelihood and the prior into a posterior over the parameters.
The Frequentist combines the loss and the regularization penalty into a regularized loss over the parameters.
The two frameworks correspond almost one-to-one:
each likelihood maps to a loss,
each prior maps to a regularization penalty (see the identity below).
The real Bayesian/Frequentist difference is what they do afterwards.
The Bayesian assumes there is no "true" parameter (say, the mean \mu you want to estimate); there is only a posterior distribution over \mu. So a true Bayesian would not do MAP (M... [read the full post]
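The one-to-one correspondence claimed above is just the MAP identity (a standard fact, added for concreteness):
\hat\theta_{MAP} = \arg\max_\theta [ \log p(x|\theta) + \log p(\theta) ]
               = \arg\min_\theta [ -\log p(x|\theta) - \log p(\theta) ],
where -\log p(x|\theta) plays the role of the loss and -\log p(\theta) the role of the regularization penalty.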
t*****j
Posts: 1105
17
From topic: Quant board - one interview question
Well, what I wanted to ask about is the discussion and calculation of the mean squared error of maximum likelihood estimates.
My personal feeling, and it is only a feeling, is that the variance of the maximum likelihood estimate around the true value can be large: when it is good it is very accurate, but it can also be quite poor. Moment-based estimation methods may not be as accurate, but they will not be too bad. That is just my intuition.
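For the comparison the poster has in mind, the standard decomposition is worth writing down (textbook fact, not from the post):
MSE(\hat\theta) = E[(\hat\theta - \theta)^2] = Var(\hat\theta) + Bias(\hat\theta)^2,
so an estimator can trade bias against variance; under regularity conditions the MLE attains the Cramer-Rao bound asymptotically, but in small samples a moment estimator can have smaller MSE.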
o******e
Posts: 1001
18
From topic: Quant board - A question about calibration
I have a pile of data and want to fit it with an OU model and a modified version of it, using maximum likelihood estimation. Besides comparing the maximized likelihoods, what other methods can tell which model is better? Should I also check the normality of the model residuals? Thanks!
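Since the two models may have different numbers of parameters, comparing raw maximized likelihoods favors the larger one; a common penalized comparison (standard definitions, not from the post) is
AIC = 2k - 2 ln\hat{L},    BIC = k ln(n) - 2 ln\hat{L},
with k parameters and n observations, smaller being better. If the modified model nests the original OU model, a likelihood ratio test also applies.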
y**********0
Posts: 425
19
From topic: Quant board - [Martingale] a question
Changing measure from Qd to Qf:
the likelihood ratio satisfies d(Ld) = Ld*ud(t)*dWp (Wp is a Brownian motion under the real-world measure P, and Ld is a P-martingale), and
d(Lf) = Lf*uf(t)*dWp.
For the change from Qd to Qf,
L = Lf/Ld,
and then dL = L*(uf(t)-ud(t))*dWp.
But here is the problem: when changing from Qd to Qf, shouldn't the dWp above be dWd (a martingale under Qd)?
The kernel of the likelihood ratio from Qd to Qf should be uf(t)-ud(t); that is what the book says, but the resulting formula shows dWp rather than dWd. Where is the problem? Thanks.
r**a
发帖数: 536
20
来自主题: Quant版 - 【Martingale】一个问题

You can check that L_f/L_d is the likelihood process corresponding to changing measure from Q_d to Q_f. We all know that a likelihood process is a martingale, so there should exist \phi(t) such that d(L_f/L_d) = (L_f/L_d)*\phi(t)*dW_d. The key is what this \phi is.
And the \phi(t) here should be uf-ud, right?
Yes, \phi(t) = uf-ud. I got this from an Ito formula calculation. I do not know how to get it directly from d(L_f/L_d) = (L_f/L_d)*\phi(t)*dW_d. The point here is that if you plug \ph... [read the full post]
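For reference, the Ito calculation mentioned above also resolves the dWp-vs-dWd puzzle in the original question; a sketch with L = L_f/L_d:
dL = L [ (u_f - u_d) dW_p + (u_d^2 - u_f u_d) dt ]
   = L (u_f - u_d) (dW_p - u_d dt)
   = L (u_f - u_d) dW_d,
since Girsanov's theorem says dW_d = dW_p - u_d dt is a Brownian motion under Q_d. So the kernel is indeed u_f - u_d, and the driving noise becomes dW_d once the drift term is absorbed.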
c****c
Posts: 29
21
Maximum likelihood estimation usually assumes the residuals are normally distributed; can quasi-MLE handle residuals from any distribution? And what is the difference in the likelihood function?
Could some expert explain this, or recommend relevant books? Thanks~~ I am working through Bollerslev & Wooldridge (1992) and finding it very confusing...
n****e
Posts: 2401
22
From topic: Quant board - On recruiters asking about your current salary
The recruiter wants to get the minimum package for you, so that the client is more likely to close the deal. Then how does he make you more likely to accept this minimum package?
Here is the trick. The recruiter has to know your current package, so that he can estimate the likelihood of the deal.
If your current package is low, the recruiter will expect to close a quick and easy deal, so he will be more willing to present you, because most likely you will accept a better package slightly over your current ... [read the full post]
f*******n
Posts: 588
23
From topic: Sociology board - Why does probit report a z statistic?
Heh, you should really ask this on the Statistics board. Generally speaking, for any kind of regression there is theory establishing exactly what distribution the parameter estimates follow.
Maximum likelihood is the other extremum-fitting method, as opposed to least squares. Because this method usually takes the estimates to be (asymptotically) normal, what you get is a z statistic. But strictly speaking, I think that still needs to be proven.

m********1
Posts: 368
24
I think the suggestion of jsdagre on the #4 floor above is better.
My original idea was exactly the same as yours; however, it has a problem: \alpha_0 = g(\pi_0) is already given, so the only parameter to estimate is \alpha_1 = \beta_1.
Using Fisher scoring to find a solution that maximizes the defined log likelihood is more straightforward, I think. You only need to define the log-likelihood function and PROC NLMIXED will solve it for you.
h****o
Posts: 119
25
From topic: Statistics board - How to compare hierarchical models
If I have two models, one nested in the other, how do I use a likelihood ratio test to judge whether the complex model significantly improves on the simple one? The crux is that I don't know how to get each model's likelihood score out of R or SAS. Thanks, friends here!
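In R, a hedged sketch of the likelihood ratio test, assuming fit1 (simple) and fit2 (complex) are nested models fit by maximum likelihood on the same data (hypothetical object names):

ll1 <- logLik(fit1); ll2 <- logLik(fit2)
stat <- as.numeric(2 * (ll2 - ll1))        # LRT statistic
df   <- attr(ll2, "df") - attr(ll1, "df")  # difference in parameter counts
pchisq(stat, df, lower.tail = FALSE)       # p-value
# For glm fits, anova(fit1, fit2, test = "LRT") does the same in one call.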
W**********E
Posts: 242
26
From topic: Statistics board - A question about P-VALUEs
Likelihood ratio test example:
H0: var1=0. The likelihood ratio chi-square test statistic is 6.6976 with df=1, so 1-pchisq(6.697,1) = 0.0093.
However, this example gets a p-value of 0.0047 by dividing 0.0093 by 2. I am not sure why we need to divide by 2 here. Variance is always positive, and Ha: var1>0 justifies a one-tailed test only, right?
Any help will be appreciated.
c******n
Posts: 590
27
From topic: Statistics board - What is statistics, with notes on job hunting
It should still count as a kind of likelihood, just a partial likelihood.
f***a
Posts: 329
28
Alright, pasting the code here...
##### Cell growth program for Nested Sampling ####
#In 2-D parameter space, expand from a starting point
#until all points on boundary reach threshold likelihood.
###################
rm(list=ls(all=T))
###################
###### Functions (run once)#######
LL <- function(mu, sigma) sum(dnorm(dat, mu, sigma, log = TRUE))
#log-likelihood function (log = TRUE avoids underflow of the raw product)
walk <- function(p){
p.x <- p.nx <- p.y <- p.ny <- NULL
if(p$x==0) p.x <- c(x=0,y=0,nx=1,ny=0,a=p$a+da,b=p$b)
if(p$nx==0) p.nx <- c(x=1,y=0
s*****r
Posts: 790
29
Are you sure?
Maximum likelihood estimation: there is the word 'likelihood' in it, which means it requires a distribution. Just under the normality assumption it coincides with OLS; otherwise, it may not.
D*******a
Posts: 207
30
From topic: Statistics board - A gaussian statistics question for the experts
The derivation of this MLE is simple. Let me try it below; it may not be right. In LaTeX-style notation, with \propto meaning "proportional to":
L(\mu,\sigma) \propto \sigma^{-n} exp{ -1/(2\sigma^2) \sum_i (x_i - \mu)^2 }
             = \sigma^{-n} exp{ -1/(2\sigma^2) [ \sum_i (x_i - \bar{x})^2 + n(\bar{x} - \mu)^2 ] }
             \propto \sigma^{-n} exp{ -1/(2\sigma^2) \sum_i (x_i - \bar{x})^2 }   (after setting \hat\mu = \bar{x})
Setting the derivative with respect to \sigma to zero maximizes the likelihood; the answer seems to agree with the variance formula in 1).
j*****e
Posts: 182
31
From topic: Statistics board - proportion test
I forgot to mention that some of the p-values associated with the intercept estimates in your SAS output are useful, too. These tests are Wald tests.
If you want to perform a likelihood ratio test, you have to combine categories, refit the model to the data, and pull out the -2 log likelihood values yourself.
s**********e
Posts: 63
32
The Bayesian version can be done in MATLAB:
Step 1: Gibbs sampling. Take the marginal probability distribution of your estimation result and run the Gibbs sampler. Starting value: your OLS result, or whatever model you used to estimate the parameters.
Step 2: Get your posterior and use Chib's method to obtain the log marginal likelihood (LME) of each model. Taking the exponent of the LME gives the marginal likelihood (ML). Add up the ML of all of your models; they s
g******h
Posts: 266
33
From topic: Statistics board - How to compute this formula with a SAS Macro?
I want to compute a likelihood formula in SAS, for picking the best Box-Cox lambda by hand (intentionally not using the transreg procedure). lambda should be a loop variable. The (profile log-)likelihood is
Likelihood(lambda) = -n/2 * log[ 1/n * Sum( (X^lambda - mean(X^lambda))^2 ) ] + (lambda - 1) * Sum(ln X)
I don't quite know how to put several computed statistics into macro variables and then call them in a loop; I'm not very familiar with macros either. My data are not univariate: there are two variables, X1 and X2, and I need to find the best lambda for each separately. I want to try lambda from -5 to 5 with step 0.05 to find the best value.
Could someone fluent in macros give me some guidance? Many thanks.
Part of the data follows:
Obs X1 X2
1 47.4 2.05
2 35.8 1.02
3 32.9 2.53
4 1508.5 1.23
5 1217.4
A*******s
Posts: 3942
34
From topic: Statistics board - How to compute this formula with a SAS Macro?
my 2 cents, correct me if i'm wrong:
1. Transpose the X dataset so you have two rows and 10 columns. You can then use data-step functions, which I think is more efficient than proc sql functions.
2. Then use the call symput routine to pass the value of Likelihood(lambda) to a macro variable for each row.
3. Iterate the calculation with a %do loop, like this (there may be some syntax errors; note %do only steps by integers, so loop an integer counter and derive lambda from it):
%do i=0 %to 200;  /* lambda = -5 + 0.05*i covers -5 to 5 by 0.05 */
%let lambda=%sysevalf(-5 + 0.05*&i);
data transposed;
set transposed;
likelihood=(your function);
call sy
z*******9
Posts: 167
35
From topic: Statistics board - Bayesian analysis for continuous variables
In computation one usually takes the log of the likelihood; that way, even if a likelihood is 10^-100, after taking the log it is still a perfectly manageable number rather than an underflow.
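A quick R illustration of the underflow point (toy standard-normal data, assumed for the example):

set.seed(42)
x <- rnorm(1000)
prod(dnorm(x))             # raw likelihood underflows to 0 in double precision
sum(dnorm(x, log = TRUE))  # log-likelihood stays finite (about -1400)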
a*********r
Posts: 139
36
The likelihood function is a sufficient statistic, just like the whole sample. (This shows that a sufficient statistic always exists.) However, the motivation for finding a sufficient statistic is "to shrink the sample X without losing information." Thus, the likelihood function is pretty much useless in this sense.
Hope this helps.
I*****a
Posts: 5425
37
I don't know.
Do we say a likelihood function is sufficient?
Is a likelihood function a function of parameters?
a*********r
Posts: 139
38
The likelihood function is fully specified by the sample. Considered as a function L_{X}(\cdot), it is a statistic. Note that theta is a dummy variable in the likelihood function. I know lots of people get confused about this.
I*****a
Posts: 5425
39
I don't quite think so.
Likelihood is a function of the parameters, no matter whether you think in the frequentists' way or the Bayesian way. This is well established in many textbooks, as well as on Wikipedia.
Concept-wise, the likelihood is not the same as the "joint density/distribution", although mathematically they are equal.
Z = x^2 + y is a quadratic as a function of x, but not as a function of y.

I*****a
Posts: 5425
40
And OK, I may try to ask some professors tomorrow or the day after about sufficient likelihood.
I don't believe PROFESSORS that much, though. I believe textbooks a lot more, especially on concepts. But I will try; I will learn stuff either way.

(quoting the post being replied to:)
dummy variable, you can call it theta, beta, t, s, whatever.
if we allow h=1 which is always legal? Yes. We can always let h=1 in the Factorization Theorem and get the likelihood function, which is a sufficient statistic.
lots of people here do not even ful... [read the full post]
a*********r
Posts: 108
41
You are correct. Avidswimmer is indeed conceptually wrong. The likelihood is never considered a statistic.

a*********r
Posts: 139
42
With respect, actually you are the one who is conceptually wrong. Can you point out why I'm wrong? Can you answer what happens when h=1 in the Factorization Theorem? The only restriction on h is that it is a Borel-measurable function that does not depend on the parameters; obviously, the constant function 1 is always a valid choice.
The likelihood function can always be considered as a statistic. When we talk about the likelihood function, we fix the x's and consider it as a function of the parameter theta. Thus, theta ... [read the full post]
a******n
Posts: 11246
43
Many thanks to brother aniu for sharing.
On parameter estimation for logistic regression: if it were me, I'd probably say this regression amounts to pushing Y through the logit transform into a new Y', and then regressing Y' on Xb with ordinary least squares. Could the experts say whether that works?
As for how the MLE is done, I think you write out the likelihood function and maximize it. The NR [Newton-Raphson] method is an algorithmic detail; in theory it is nothing but maximizing the likelihood (a sketch follows below).
Also, seeing dreamer mentioned, I searched this board and found that dreamer had posted interview notes a few months ago. Looking at them, they are so hard; there are many I couldn't answer with any confidence :(
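A hedged R sketch of the Newton-Raphson/IRLS route for the logistic MLE discussed above, assuming X is an n x p design matrix and y is a 0/1 response (and that the MLE exists, i.e. no perfect separation):

logistic_mle <- function(X, y, iters = 25) {
  beta <- rep(0, ncol(X))
  for (i in 1:iters) {
    p <- 1 / (1 + exp(-X %*% beta))   # fitted probabilities
    W <- as.vector(p * (1 - p))       # IRLS weights
    # Newton step: beta + (X'WX)^{-1} X'(y - p)
    beta <- beta + solve(t(X) %*% (X * W), t(X) %*% (y - p))
  }
  drop(beta)
}
# Check against glm(y ~ X - 1, family = binomial)$coefficients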
B****n
Posts: 11290
44

If you said Wald or likelihood [ratio] for this question, you would be in big trouble. Those methods are for hypothesis testing; MLE or quasi-likelihood estimators are for estimating parameters.
o******e
Posts: 1001
45
From topic: Statistics board - The MLE distribution-fitting question again
This situation can indeed happen.
Given data A = [a_1, a_2, a_3], suppose model X is:
X: a_3 = e_x*a_1 + f_x*a_2 + g_x*N(0,1)
When we estimate the parameters e_x, f_x, g_x by maximum likelihood, we maximize:
1/(sqrt(2*PI)*g_x) * exp( -(a_3 - e_x*a_1 - f_x*a_2)^2 / (2*g_x^2) )
Once we have e_x, f_x, g_x, we form N(0,1) = (a_3 - e_x*a_1 - f_x*a_2)/g_x, and when comparing it against N(0,1) the likelihood value being used is:
1/sqrt(2*PI) * exp( -(a_3 - e_x*a_1 - f_x*a_2)^2 / (2*g_x^2) )
That is, the g_x factor has been dropped, which makes the values inconsistent when comparing models.

o******e
Posts: 1001
46
From topic: Statistics board - MLE question again
The earlier posts made the problem more and more complicated; thinking it over, the question can be stated simply.
Given a data series X, if we assume it is normal, N(mu, sigma^2), we can estimate the parameters directly by maximum likelihood.
The second approach goes like this: since X = N(mu, sigma^2) = mu + sigma*N(0,1), the quantity (X-mu)/sigma should follow a standard normal distribution, so we can also estimate the parameters by maximum likelihood on the standardized values.
But the two approaches give different parameter estimates. Which one should be chosen? I lean toward the first, but I don't know why, theoretically, the second fails. Thanks!
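The theoretical gap in the second approach (and the missing g_x in the previous post) is the change-of-variables Jacobian: if Z = (X - mu)/sigma, densities transform as
f_X(x) = (1/\sigma) \varphi((x-\mu)/\sigma),   \varphi(z) = (1/\sqrt{2\pi}) e^{-z^2/2},
so maximizing the standard-normal density of the standardized values alone drops the 1/\sigma factor, i.e. the -n ln\sigma term of the correct log-likelihood, and yields different (wrong) estimates.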
z****g
Posts: 1978
47
From topic: Statistics board - MLE question again
You haven't really gotten into MLE yet.
The essence of MLE is maximum entropy: with large samples the log-likelihood function approximates the entropy of the distribution, which is the ultimate regularity of the real world. So although the log-likelihood function is usually built directly from the density function, that is only the continuous-variable case; in general it should come from the differential of the distribution function, so you cannot simply plug the transformed values into the standard normal density.
Your notion of residuals is just the entry-level view.
Statistics has two big families of estimation methods, MLE-type and moment-type: the first is based directly on maximum entropy, the second on Taylor approximations of the moment generating function.

A*******s
Posts: 3942
48
From topic: Statistics board - A ROC AUC question?
Ranking loss does not necessarily equal likelihood/entropy loss.
When you add a variable, the (in-sample) likelihood necessarily increases,
but the AUC does not necessarily increase.
z**********i
Posts: 12276
49
From topic: Statistics board - A theory question
Because my actual data are bounded counts, the prevailing view is that the beta-binomial should beat the negative binomial. While fitting with NLMIXED I found the beta-binomial very hard to converge; in the end, although the gradient was still large, I let it pass, since the two models' estimates were already very close. I obtained the AIC and -2LL, so I can use a likelihood ratio to argue that BB is better.
I'd like to be more theoretical: write out their log-likelihood functions and then explain why BB is better. That is the real difficulty for me.
Many thanks for everyone's warm help!!
**************************************
Since you have a very different assumption about the data, I am suspecting that the usual statistical methods will not work here.
Can we try the goodness of fit respectively?
Test the hy... [read the full post]
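For the "write out the log-likelihoods" step, the standard forms are (textbook facts, not from the thread), for counts y_i out of n_i trials under the beta-binomial with parameters \alpha, \beta, and for counts under the negative binomial with size r and probability p:
l_BB(\alpha,\beta) = \sum_i [ ln C(n_i, y_i) + ln B(y_i + \alpha, n_i - y_i + \beta) - ln B(\alpha, \beta) ]
l_NB(r, p) = \sum_i [ ln\Gamma(y_i + r) - ln\Gamma(r) - ln(y_i!) + r ln(1-p) + y_i ln(p) ]
The qualitative argument the poster wants then follows from the supports: the beta-binomial places probability only on 0..n_i, matching bounded counts, while the negative binomial spills mass beyond the bound.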