Definitions of "holding":
1. As a noun: n. a shareholding or stake; property held; the holdings (of a museum, library, etc.); land held under tenancy
2. As a verb: v. holding; grasping; embracing; supporting; pressing on (an injured body part, etc.); keeping (in a position)
3. holding is the present participle of the verb hold.
4. Pronunciation: UK ['həʊldɪŋ], US ['hoʊldɪŋ]
5. Collocations: (1) holding company (a controlling company) (2) holding time (occupancy time) (3) holding pond (a storage pond)
6. Example: She has a 40% holding in the company.
holding is used as both a noun and a verb (it is the present participle of hold); as a verb it can also mean to hold (an event). Inflections: base form hold; plural holdings.
Holding Back the Tears — by 东方神起 (TVXQ); original Korean title 그리고… (Holding Back The Tears). English rendering of the lyrics (the Korean original and its romanization carry the same content and are omitted): A picture blurred to white and my all-but-erased scent are hidden among the dazzling clouds... My heart, with nothing left to say, slowly shifts; all that remains in my hands is the time that slipped away in between. I'm holding back the tears. Carrying my heart lightly, I walk on; in a place neither near nor far, another me will be standing. I won't cry. Once more I bring my two hands together; somewhere different, I live not in memories but in the present. Foolish as it may seem, we are always together. The pain I want to escape dries the tears that run through my whole body. I'm living with my tears. Carrying my heart lightly, I walk on; in a place neither near nor far, another me will be standing. I won't cry. I'm holding back the tears. Carrying my faith, not lightly, I run; in a place neither high nor low, yet another me is standing. I will cry in a soft voice.
FF Top Holding refers to Faraday Future, a NASDAQ-listed electric-vehicle startup headquartered in California. Faraday Future raised US$100 million for the launch of its flagship product, the FF 91 electric sedan, and restructured its board of directors under the leadership of founder Jia Yueting.
Faraday Future (FF Top Holding), also known as FF, is a subsidiary indirectly owned by FF Global Partners LLC, a firm owned by some two dozen FF global partners and former FF executives that holds more than 20% of Faraday Future's shares and roughly 36% of its voting power.
CCCG Holding is a central state-owned enterprise: 中交国际(香港)控股有限公司 (CCCC International (Hong Kong) Holdings, "CCCI" for short) is an overseas subsidiary of China Communications Construction Co., Ltd. (CCCC), headquartered in Hong Kong. As the responsible entity and main financing platform for CCCC's overseas investment business, CCCI handles equity acquisitions and restructuring of CCCC's overseas assets, as well as overseas infrastructure investment, construction, and asset management.
No; holding means keeping or grasping, for example:
1. Gucci will be holding fashion shows to present their autumn collection.
2. The Foundation is holding a dinner in honour of something or other.
holding company UK [ˈhəʊldɪŋ ˈkʌmpəni] US [ˈholdɪŋ ˈkʌmpəni] Meaning: a company that controls other companies. Example: Moody's said its higher rating was due to the way the holding company is structured. (Moody's said Nomura Securities was rated above Nomura Holdings because of how the holding company is structured.) Plural: holding companies
holding UK ['həʊldɪŋ] US ['holdɪŋ] n. the holding (of an event); support. v. to convene; to serve as (the -ing form of hold); to grip. n. (Holding) a surname (English: Holding)
Having previously walked through how the official Mahout 20news example is invoked, I wanted to reproduce its workflow on another example, and found one online that predicts from the weather whether a game can be played (the classic PlayTennis data set).
Training data:
Day Outlook Temperature Humidity Wind PlayTennis
D1 Sunny Hot High Weak No
D2 Sunny Hot High Strong No
D3 Overcast Hot High Weak Yes
D4 Rain Mild High Weak Yes
D5 Rain Cool Normal Weak Yes
D6 Rain Cool Normal Strong No
D7 Overcast Cool Normal Strong Yes
D8 Sunny Mild High Weak No
D9 Sunny Cool Normal Weak Yes
D10 Rain Mild Normal Weak Yes
D11 Sunny Mild Normal Strong Yes
D12 Overcast Mild High Strong Yes
D13 Overcast Hot Normal Weak Yes
D14 Rain Mild High Strong No
Test data:
sunny,hot,high,weak
Result:
Yes => 0.007039
No => 0.027418
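These two numbers are just what a plain (unsmoothed) Naive Bayes computation over the 14 training rows yields, up to rounding: the class prior multiplied by the per-feature conditional probabilities.

P(Yes) * P(Sunny|Yes) * P(Hot|Yes) * P(High|Yes) * P(Weak|Yes) = 9/14 * 2/9 * 2/9 * 3/9 * 6/9 ≈ 0.00705
P(No) * P(Sunny|No) * P(Hot|No) * P(High|No) * P(Weak|No) = 5/14 * 3/5 * 2/5 * 4/5 * 2/5 ≈ 0.02743

Since the No score is the larger of the two, the sample is classified as No.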
So I used Java to call Mahout's tooling classes and implement the classification.
Basic approach:
1. Prepare the labeled data.
2. Train with Mahout's tooling classes to obtain a model.
3. Convert the data to be classified into vectors.
4. Classify the vectors with the trained classifier.
Here is my implementation =>
1. Prepare the labeled data:
Create the directory /zhoujianfeng/playtennis/input on HDFS and upload the data under the label folders no and yes, as sketched below.
Each data file holds one instance; e.g. file D1 contains: Sunny Hot High Weak
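A minimal sketch of that upload, assuming the labeled files sit locally under ./playtennis/input/yes and ./playtennis/input/no (the local layout is an assumption; the HDFS paths match the WORK_DIR used in the code below):

hadoop fs -mkdir -p /zhoujianfeng/playtennis/input
hadoop fs -put ./playtennis/input/yes /zhoujianfeng/playtennis/input/
hadoop fs -put ./playtennis/input/no /zhoujianfeng/playtennis/input/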
2. Train with Mahout's tooling classes to obtain a model.
3. Convert the data to be classified into vectors.
4. Classify the vectors with the trained classifier.
The code for these three steps is pasted in one go below; it consists of two classes, PlayTennis1 and BayesCheckData =>
package myTesting.bayes;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.ToolRunner;
import org.apache.mahout.classifier.naivebayes.training.TrainNaiveBayesJob;
import org.apache.mahout.text.SequenceFilesFromDirectory;
import org.apache.mahout.vectorizer.SparseVectorsFromSequenceFiles;
public class PlayTennis1 {
private static final String WORK_DIR = "hdfs://192.168.9.72:9000/zhoujianfeng/playtennis";
/*
* Test driver
*/
public static void main(String[] args) {
//Convert the training data into vectors
makeTrainVector();
//Train the model
makeModel(false);
//Classify the test data
BayesCheckData.printResult();
}
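//Note: makeCheckVector() below is never called from main(); BayesCheckData
//builds the test vector in memory instead (see the note after that class).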
public static void makeCheckVector(){
//Serialize the test data into sequence files
try {
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
String input = WORK_DIR+Path.SEPARATOR+"testinput";
String output = WORK_DIR+Path.SEPARATOR+"tennis-test-seq";
Path in = new Path(input);
Path out = new Path(output);
FileSystem fs = FileSystem.get(conf);
if(fs.exists(in)){
if(fs.exists(out)){
//the boolean argument means: delete recursively
fs.delete(out, true);
}
SequenceFilesFromDirectory sffd = new SequenceFilesFromDirectory();
String[] params = new String[]{"-i",input,"-o",output,"-ow"};
ToolRunner.run(sffd, params);
}
} catch (Exception e) {
e.printStackTrace();
System.out.println("Failed to serialize the files!");
System.exit(1);
}
//Convert the sequence files into vector files
try {
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
String input = WORK_DIR+Path.SEPARATOR+"tennis-test-seq";
String output = WORK_DIR+Path.SEPARATOR+"tennis-test-vectors";
Path in = new Path(input);
Path out = new Path(output);
FileSystem fs = FileSystem.get(conf);
if(fs.exists(in)){
if(fs.exists(out)){
//the boolean argument means: delete recursively
fs.delete(out, true);
}
SparseVectorsFromSequenceFiles svfsf = new SparseVectorsFromSequenceFiles();
String[] params = new String[]{"-i",input,"-o",output,"-lnorm","-nv","-wt","tfidf"};
ToolRunner.run(svfsf, params);
}
} catch (Exception e) {
e.printStackTrace();
System.out.println("Failed to convert the sequence files into vectors!");
System.exit(2);
}
}
public static void makeTrainVector(){
//Serialize the training data into sequence files
try {
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
String input = WORK_DIR+Path.SEPARATOR+"input";
String output = WORK_DIR+Path.SEPARATOR+"tennis-seq";
Path in = new Path(input);
Path out = new Path(output);
FileSystem fs = FileSystem.get(conf);
if(fs.exists(in)){
if(fs.exists(out)){
//the boolean argument means: delete recursively
fs.delete(out, true);
}
SequenceFilesFromDirectory sffd = new SequenceFilesFromDirectory();
String[] params = new String[]{"-i",input,"-o",output,"-ow"};
ToolRunner.run(sffd, params);
}
} catch (Exception e) {
e.printStackTrace();
System.out.println("Failed to serialize the files!");
System.exit(1);
}
//Convert the sequence files into vector files
try {
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
String input = WORK_DIR+Path.SEPARATOR+"tennis-seq";
String output = WORK_DIR+Path.SEPARATOR+"tennis-vectors";
Path in = new Path(input);
Path out = new Path(output);
FileSystem fs = FileSystem.get(conf);
if(fs.exists(in)){
if(fs.exists(out)){
//the boolean argument means: delete recursively
fs.delete(out, true);
}
SparseVectorsFromSequenceFiles svfsf = new SparseVectorsFromSequenceFiles();
String[] params = new String[]{"-i",input,"-o",output,"-lnorm","-nv","-wt","tfidf"};
ToolRunner.run(svfsf, params);
}
} catch (Exception e) {
e.printStackTrace();
System.out.println("Failed to convert the sequence files into vectors!");
System.exit(2);
}
}
public static void makeModel(boolean completelyNB){
try {
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
String input = WORK_DIR+Path.SEPARATOR+"tennis-vectors"+Path.SEPARATOR+"tfidf-vectors";
String model = WORK_DIR+Path.SEPARATOR+"model";
String labelindex = WORK_DIR+Path.SEPARATOR+"labelindex";
Path in = new Path(input);
Path out = new Path(model);
Path label = new Path(labelindex);
FileSystem fs = FileSystem.get(conf);
if(fs.exists(in)){
if(fs.exists(out)){
//the boolean argument means: delete recursively
fs.delete(out, true);
}
if(fs.exists(label)){
//the boolean argument means: delete recursively
fs.delete(label, true);
}
TrainNaiveBayesJob tnbj = new TrainNaiveBayesJob();
String[] params = null;
if(completelyNB){
params = new String[]{"-i",input,"-el","-o",model,"-li",labelindex,"-ow","-c"};
}else{
params = new String[]{"-i",input,"-el","-o",model,"-li",labelindex,"-ow"};
}
ToolRunner.run(tnbj, params);
}
} catch (Exception e) {
e.printStackTrace();
System.out.println("Failed to build the training model!");
System.exit(3);
}
}
}
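For reference, the three jobs that PlayTennis1 drives programmatically have one-to-one command-line equivalents in Mahout; a rough sketch (assuming the mahout launcher is on the PATH and points at the same cluster) would be:

mahout seqdirectory -i /zhoujianfeng/playtennis/input -o /zhoujianfeng/playtennis/tennis-seq -ow
mahout seq2sparse -i /zhoujianfeng/playtennis/tennis-seq -o /zhoujianfeng/playtennis/tennis-vectors -lnorm -nv -wt tfidf
mahout trainnb -i /zhoujianfeng/playtennis/tennis-vectors/tfidf-vectors -el -o /zhoujianfeng/playtennis/model -li /zhoujianfeng/playtennis/labelindex -ow

The flags mirror the params arrays above; adding -c to trainnb selects the complementary Naive Bayes variant (the completelyNB flag in makeModel).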
package myTesting.bayes;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.mahout.classifier.naivebayes.BayesUtils;
import org.apache.mahout.classifier.naivebayes.NaiveBayesModel;
import org.apache.mahout.classifier.naivebayes.StandardNaiveBayesClassifier;
import org.apache.mahout.common.Pair;
import org.apache.mahout.common.iterator.sequencefile.PathType;
import org.apache.mahout.common.iterator.sequencefile.SequenceFileDirIterable;
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.Vector.Element;
import org.apache.mahout.vectorizer.TFIDF;
import com.google.common.collect.ConcurrentHashMultiset;
import com.google.common.collect.Multiset;
public class BayesCheckData {
private static StandardNaiveBayesClassifier classifier;
private static Map<String, Integer> dictionary;
private static Map<Integer, Long> documentFrequency;
private static Map<Integer, String> labelIndex;
public void init(Configuration conf){
try {
String modelPath = "/zhoujianfeng/playtennis/model";
String dictionaryPath = "/zhoujianfeng/playtennis/tennis-vectors/dictionary.file-0";
String documentFrequencyPath = "/zhoujianfeng/playtennis/tennis-vectors/df-count";
String labelIndexPath = "/zhoujianfeng/playtennis/labelindex";
dictionary = readDictionnary(conf, new Path(dictionaryPath));
documentFrequency = readDocumentFrequency(conf, new Path(documentFrequencyPath));
labelIndex = BayesUtils.readLabelIndex(conf, new Path(labelIndexPath));
NaiveBayesModel model = NaiveBayesModel.materialize(new Path(modelPath), conf);
classifier = new StandardNaiveBayesClassifier(model);
} catch (IOException e) {
e.printStackTrace();
System.out.println("Initialization for building the test vector failed!");
System.exit(4);
}
}
/**
* Load the dictionary file; key: term value, value: term ID
* @param conf
* @param dictionnaryDir
* @return
*/
private static Map<String, Integer> readDictionnary(Configuration conf, Path dictionnaryDir) {
Map<String, Integer> dictionnary = new HashMap<String, Integer>();
PathFilter filter = new PathFilter() {
@Override
public boolean accept(Path path) {
String name = path.getName();
return name.startsWith("dictionary.file");
}
};
for (Pair<Text, IntWritable> pair : new SequenceFileDirIterable<Text, IntWritable>(dictionnaryDir, PathType.LIST, filter, conf)) {
dictionnary.put(pair.getFirst().toString(), pair.getSecond().get());
}
return dictionnary;
}
/**
* Load the document-frequency files under df-count; key: term ID, value: document frequency
* @param conf
* @param documentFrequencyDir
* @return
*/
private static Map<Integer, Long> readDocumentFrequency(Configuration conf, Path documentFrequencyDir) {
Map<Integer, Long> documentFrequency = new HashMap<Integer, Long>();
PathFilter filter = new PathFilter() {
@Override
public boolean accept(Path path) {
return path.getName().startsWith("part-r");
}
};
for (Pair<IntWritable, LongWritable> pair : new SequenceFileDirIterable<IntWritable, LongWritable>(documentFrequencyDir, PathType.LIST, filter, conf)) {
documentFrequency.put(pair.getFirst().get(), pair.getSecond().get());
}
return documentFrequency;
}
public static String getCheckResult(){
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
String classify = "NaN";
BayesCheckData cdv = new BayesCheckData();
cdv.init(conf);
System.out.println("init done...............");
Vector vector = new RandomAccessSparseVector(10000);
TFIDF tfidf = new TFIDF();
//sunny,hot,high,weak
Multiset<String> words = ConcurrentHashMultiset.create();
words.add("sunny",1);
words.add("hot",1);
words.add("high",1);
words.add("weak",1);
int documentCount = documentFrequency.get(-1).intValue(); // the entry with key -1 stores the total number of documents
for (Multiset.Entry<String> entry : words.entrySet()) {
String word = entry.getElement();
int count = entry.getCount();
Integer wordId = dictionary.get(word); // the term ID assigned in dictionary.file-0 during training vectorization
if (wordId == null){ // skip words never seen in the training data
continue;
}
if (documentFrequency.get(wordId) == null){
continue;
}
Long freq = documentFrequency.get(wordId);
double tfIdfValue = tfidf.calculate(count, freq.intValue(), 1, documentCount);
vector.setQuick(wordId, tfIdfValue);
}
// Run the Naive Bayes classifier and pick the best-scoring label
Vector resultVector = classifier.classifyFull(vector);
double bestScore = -Double.MAX_VALUE;
int bestCategoryId = -1;
for(Element element: resultVector.all()) {
int categoryId = element.index();
double score = element.get();
System.out.println("categoryId:"+categoryId+" score:"+score);
if (score > bestScore) {
bestScore = score;
bestCategoryId = categoryId;
}
}
classify = labelIndex.get(bestCategoryId)+"(categoryId="+bestCategoryId+")";
return classify;
}
public static void printResult(){
System.out.println("检测所属类别是:"+getCheckResult());
}
}
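One design point worth spelling out: init() deliberately loads dictionary.file-0 and df-count from tennis-vectors, i.e. from the vectorization of the training data, so the hand-built test vector uses the same term IDs and IDF statistics the model was trained on. Vectorizing the test data separately (as the unused makeCheckVector() would) produces its own dictionary whose term IDs are not compatible with the model, which is presumably why the test vector is assembled in memory here.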