<h1>Text Vectorization and Stop Words</h1>
<p>While preparing text for vectorization, I noticed that a few words were not extracted as features, for example "I".</p>
<p>ChatGPT answered that the vectorizer ships with a default English stop word list ("i", "the", and so on) and that passing a parameter would disable it. For scikit-learn that explanation is not quite right: CountVectorizer's <code>stop_words</code> parameter already defaults to <code>None</code>, so no stop word list is applied unless you ask for one. The word "I" disappears because of the default <code>token_pattern</code>, <code>r"(?u)\b\w\w+\b"</code>, which only keeps tokens of two or more word characters; single-letter tokens such as "I" and "a" are silently dropped. To keep them, relax <code>token_pattern</code> instead:</p>
<pre><code class="language-python">from sklearn.feature_extraction.text import CountVectorizer

# Sample text data
documents = [
    "I love programming in Python",
    "Python is a great language",
    "I love coding",
]

# NOTE: the default token_pattern, r"(?u)\b\w\w+\b", drops single-character
# tokens such as "I" and "a". Relaxing it to \b\w+\b keeps them.
# (stop_words=None is already the default, so no stop word list is applied.)
vect = CountVectorizer(token_pattern=r"(?u)\b\w+\b")

# Fit and transform the data
X = vect.fit_transform(documents)

# Convert to a dense array
X_dense = X.toarray()

# Get feature names (tokens)
feature_names = vect.get_feature_names_out()

# Print feature names and the dense array for verification
print("Feature names:", feature_names)
print("Dense array:\n", X_dense)

# Sum the counts of each token across all documents
token_counts = X_dense.sum(axis=0)

# Create a dictionary of tokens and their counts
token_count_dict = dict(zip(feature_names, token_counts))

# Print the token counts
for token, count in token_count_dict.items():
    print(f"{token}: {count}")
</code></pre>
<p>Below is the new output. Note that both single-letter tokens, "i" and "a", now appear as features:</p>
<pre><code class="language-plain_text">Feature names: ['a' 'coding' 'great' 'i' 'in' 'is' 'language' 'love' 'programming' 'python']
Dense array:
 [[0 0 0 1 1 0 0 1 1 1]
  [1 0 1 0 0 1 1 0 0 1]
  [0 1 0 1 0 0 0 1 0 0]]
a: 1
coding: 1
great: 1
i: 2
in: 1
is: 1
language: 1
love: 2
programming: 1
python: 2
</code></pre>