Abstract: Vast amounts of preference data arise in daily life and scientific research, where observations consist of preferences over a set of available objects. The observations are usually recorded as ranking data or multinomial choice data. Sometimes there is no clear preference between two objects, which results in ranking data with ties, also called censored rank-ordered data. To study such data, we develop a symmetric Bayesian probit model based on Thurstone's random utility (discriminal process) assumption. However, parameter identification is an unavoidable problem for the probit model, i.e., determining the location and scale of the latent utilities. The standard identification method specifies one of the objects as a base and then models the differences between the other utilities and the utility of that base. However, in the case of multinomial data, Bayesian predictions have been shown to be sensitive to the choice of the base: choosing different objects as the base leads to different predictions. In this thesis, we instead construct a virtual base, namely the average of the whole set of utilities, which is invariant under any relabeling of the objects. Based on this new base, we propose a symmetric identification approach that fully identifies the multinomial probit model without reference to object labels. Furthermore, we design a Bayesian algorithm to fit this model. Through simulation studies and real data analysis, we find that the new probit model is not only fully identified but also free of the sensitivity of the standard identification method. We then generalize this probit model to handle general censored rank-ordered data, obtaining a symmetric Bayesian censored rank-ordered probit model. Finally, we apply this model to analyze Hong Kong horse racing data.
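As a brief illustrative sketch of the identification idea described above (the notation U_j, \mu_j, \varepsilon_j and the parameterization below are introduced here for illustration and are not taken from the thesis): under Thurstone's random utility assumption, each object j among K available objects carries a latent utility

\[ U_j = \mu_j + \varepsilon_j, \qquad j = 1, \dots, K. \]

The standard identification fixes one object, say object K, as the base and models the differences

\[ W_j = U_j - U_K, \qquad j = 1, \dots, K - 1, \]

so inference may depend on which object is labeled as the base. The symmetric identification instead takes the average utility as a virtual base,

\[ \bar{U} = \frac{1}{K} \sum_{k=1}^{K} U_k, \qquad V_j = U_j - \bar{U}, \qquad \sum_{j=1}^{K} V_j = 0, \]

so the resulting parameterization is invariant under any relabeling of the objects.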