Author: 矮辛楚楚拉_760 | Source: Internet | 2023-02-04 09:36
Using an OCR tool I extract text from screenshots (each about 1-5 sentences). However, when manually verifying the extracted text, I noticed that several errors occur from time to time.
Given the text "Hello there! I really like Spark ❤️!", I noticed that:
1) Characters like "I", "!" and "l" are replaced by "|".
2) Emojis are not extracted correctly and are replaced by other characters or left out.
3) Whitespace is removed from time to time.
As a result, I might end up with a string like "Hello there 7l | real|y like Spark!".
Since I am trying to match these strings against a dataset containing the correct text (in this case "Hello there! I really like Spark ❤️!"), I am looking for an efficient way to match strings in Spark.
Can anyone suggest an efficient Spark algorithm that would let me compare the extracted texts (~100,000) with my dataset (~100 million)?
1> hi-zir..:
I wouldn't use Spark in the first place, but if you are really committed to this particular stack, you can combine a bunch of ML transformers to get the best matches. You will need a Tokenizer (or split):
import org.apache.spark.ml.feature.RegexTokenizer
val tokenizer = new RegexTokenizer().setPattern("").setInputCol("text").setMinTokenLength(1).setOutputCol("tokens")
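As a quick sanity check (a sketch, assuming the snippets run in spark-shell or with import spark.implicits._ in scope), the empty pattern combined with minTokenLength = 1 turns every character into its own token:
// Each character (lowercased by default) becomes a token, e.g. [s, p, a, r, k, !].
tokenizer.transform(Seq("Spark!").toDF("text")).select("tokens").show(false)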
NGram (for example 3-grams):
import org.apache.spark.ml.feature.NGram
val ngram = new NGram().setN(3).setInputCol("tokens").setOutputCol("ngrams")
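To see why short character n-grams are robust to this kind of OCR noise, here is a minimal plain-Scala illustration (not part of the original answer; it simply compares the character-trigram sets of the corrupted and the correct string):
// Character trigrams of the OCR output still overlap heavily with those of the correct text,
// so a set-similarity measure such as Jaccard distance (which MinHash approximates) stays well below 1.
def charTrigrams(s: String): Set[String] = s.toLowerCase.sliding(3).toSet
def jaccardDistance(a: Set[String], b: Set[String]): Double =
  1.0 - a.intersect(b).size.toDouble / a.union(b).size
val ocr = "Hello there 7l | real|y like Spark!"
val correct = "Hello there ! I really like Spark ??!"
println(jaccardDistance(charTrigrams(ocr), charTrigrams(correct)))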
A vectorizer (for example CountVectorizer or HashingTF):
import org.apache.spark.ml.feature.HashingTF
val vectorizer = new HashingTF().setInputCol("ngrams").setOutputCol("vectors")
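If hash collisions between different n-grams become an issue, the feature dimension can be raised from HashingTF's default of 2^18 (a sketch; the exact value is an arbitrary choice):
// More buckets mean fewer collisions at the cost of larger (but still sparse) vectors.
vectorizer.setNumFeatures(1 << 20)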
and LSH:
import org.apache.spark.ml.feature.{MinHashLSH, MinHashLSHModel}
// Increase numHashTables in practice.
val lsh = new MinHashLSH().setInputCol("vectors").setOutputCol("lsh")
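Acting on the comment above, the number of hash tables can be raised, trading extra computation for better recall in the approximate join (a sketch; 5 is an arbitrary value):
// More hash tables -> more accurate approxSimilarityJoin results, but more work per row.
lsh.setNumHashTables(5)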
Combine everything with a Pipeline:
import org.apache.spark.ml.Pipeline
val pipeline = new Pipeline().setStages(Array(tokenizer, ngram, vectorizer, lsh))
Fit it on the example data:
val query = Seq("Hello there 7l | real|y like Spark!").toDF("text")
val db = Seq(
  "Hello there ! I really like Spark ??!",
  "Can anyone suggest an efficient algorithm"
).toDF("text")
val model = pipeline.fit(db)
Transform both:
val dbHashed = model.transform(db)
val queryHashed = model.transform(query)
and join:
model.stages.last.asInstanceOf[MinHashLSHModel]
  .approxSimilarityJoin(dbHashed, queryHashed, 0.75).show
+--------------------+--------------------+------------------+
| datasetA| datasetB| distCol|
+--------------------+--------------------+------------------+
|[Hello there ! ...|[Hello there 7l |...|0.5106382978723405|
+--------------------+--------------------+------------------+
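If only the matched texts and the distance are needed, the join result can be flattened before display (a sketch that pulls the original text out of the datasetA / datasetB struct columns shown above):
import org.apache.spark.sql.functions.col
model.stages.last.asInstanceOf[MinHashLSHModel]
  .approxSimilarityJoin(dbHashed, queryHashed, 0.75)
  .select(
    col("datasetA.text").alias("db_text"),
    col("datasetB.text").alias("query_text"),
    col("distCol"))
  .orderBy("distCol")
  .show(false)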
The same approach can be used in PySpark:
from pyspark.ml import Pipeline
from pyspark.ml.feature import RegexTokenizer, NGram, HashingTF, MinHashLSH
query = spark.createDataFrame(
    ["Hello there 7l | real|y like Spark!"], "string"
).toDF("text")
db = spark.createDataFrame([
    "Hello there ! I really like Spark ??!",
    "Can anyone suggest an efficient algorithm"
], "string").toDF("text")
model = Pipeline(stages=[
    RegexTokenizer(
        pattern="", inputCol="text", outputCol="tokens", minTokenLength=1
    ),
    NGram(n=3, inputCol="tokens", outputCol="ngrams"),
    HashingTF(inputCol="ngrams", outputCol="vectors"),
    MinHashLSH(inputCol="vectors", outputCol="lsh")
]).fit(db)
db_hashed = model.transform(db)
query_hashed = model.transform(query)
model.stages[-1].approxSimilarityJoin(db_hashed, query_hashed, 0.75).show()
# +--------------------+--------------------+------------------+
# | datasetA| datasetB| distCol|
# +--------------------+--------------------+------------------+
# |[Hello there ! ...|[Hello there 7l |...|0.5106382978723405|
# +--------------------+--------------------+------------------+
Related: Optimize a Spark job that has to compute each-to-each entry similarity and output the top N similar items for each entry
I am struggling to compute the Levenshtein distance between tables of 10 million and 70 million rows. That of course takes time, and really a lot of it. I have two questions: how fast is the algorithm mentioned above, and how would you do this without using Spark?
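As an aside on this follow-up (not from the original thread): Spark does ship a built-in levenshtein column function, but applying it pairwise requires a full cross join, which is exactly what the LSH approach above avoids. A sketch of the brute-force version, reusing the db and query DataFrames from the example:
import org.apache.spark.sql.functions.{col, levenshtein}
// Brute force: every db row is compared with every query row -- usually infeasible at 10M x 70M rows,
// shown only for contrast with the approximate LSH join.
val bruteForce = db.crossJoin(query.withColumnRenamed("text", "query_text"))
  .withColumn("lev", levenshtein(col("text"), col("query_text")))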