
Fast string search using bitwise operators

What is the fastest (parallel?) way to find a substring in a very long string using bitwise operators?

e.g. find all positions of the "GCAGCTGAAAACA" sequence in the human genome http://hgdownload.cse.ucsc.edu/goldenPath/hg18/bigZips/hg18.2bit (770MB)

*the alphabet consists of 4 symbols ('G', 'C', 'T', 'A') represented using 2 bits: 'G': 00, 'A': 01, 'T': 10, 'C': 11
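
For concreteness, a minimal C sketch of this packing, four bases per byte with the most significant pair first (the within-byte ordering is an assumption on my part; the real .2bit layout differs in detail, and valid input is assumed):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* 'G'->0, 'A'->1, 'T'->2, 'C'->3, per the mapping above */
    static int base2bits(char c) {
        switch (c) {
            case 'G': return 0; case 'A': return 1;
            case 'T': return 2; case 'C': return 3;
        }
        return -1;   /* caller is assumed to pass only G/A/T/C */
    }

    /* Pack an ASCII sequence into 2-bit codes, four bases per byte.
       Returns the number of bytes written to out. */
    static size_t pack(const char *seq, uint8_t *out) {
        size_t n = strlen(seq), nbytes = (2 * n + 7) / 8;
        memset(out, 0, nbytes);
        for (size_t i = 0; i < n; i++)
            out[i / 4] |= (uint8_t)(base2bits(seq[i]) << (6 - 2 * (i % 4)));
        return nbytes;
    }

    int main(void) {
        uint8_t buf[8];
        size_t n = pack("GCAGCTGAAAACA", buf);   /* 13 bases -> 4 bytes */
        for (size_t i = 0; i < n; i++) printf("%02x ", buf[i]);
        putchar('\n');
        return 0;
    }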

*you can assume the query string (the shorter one) is fixed in length, e.g. 127 characters

*by fastest I mean not including any pre-processing/indexing time

*the file is going to be loaded into memory after pre-processing; basically, there will be billions of short strings to search for in a larger string, all in-memory.

*bitwise because I'm looking for the simplest, fastest way to search for a bit pattern in a large bit array and stay as close as possible to the silicon.

*KMP wouldn't work well as the alphabet is small

*C code and x86 machine code would both be interesting.

Input format description (.2bit): http://jcomeau.freeshell.org/www/genome/2bitformat.html

Related:

Fastest way to scan for bit pattern in a stream of bits

Algorithm help! Fast algorithm in searching for a string with its partner

http://www.arstdesign.com/articles/fastsearch.html

http://en.wikipedia.org/wiki/Bitap_algorithm

6 Answers

#1


5  

If you're just looking through a file, you're pretty much guaranteed to be I/O-bound. Use of a large buffer (~16K) and strstr() should be all you need. If the file is encoded in ASCII, search just for "gcagctgaaaaca". If it actually is encoded in bits, just permute the possible accepted strings (there should be ~8; lop off the first byte), and use memmem() plus a tiny overlapping bit check.
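
A rough sketch of that permuted-pattern idea, assuming the question's packing (G=00, A=01, T=10, C=11, most significant pair first): pack the pattern at each of the four possible bit alignments, memmem() for the fully-determined core bytes with the partial first and last bytes lopped off, then verify those edge bytes under masks. pack_shifted() and search_all_shifts() are illustrative names, not an established API:

    #define _GNU_SOURCE            /* memmem() is a GNU extension */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static int base2bits(char c) {  /* G=00, A=01, T=10, C=11 */
        switch (c) { case 'G': return 0; case 'A': return 1;
                     case 'T': return 2; case 'C': return 3; }
        return -1;
    }

    /* Pack pat starting at bit offset shift (0, 2, 4 or 6), MSB-first.
       Returns bytes written; the masks select which bits of the first
       and last bytes actually belong to the pattern. */
    static size_t pack_shifted(const char *pat, int shift, uint8_t *buf,
                               uint8_t *head_mask, uint8_t *tail_mask) {
        size_t n = strlen(pat), nbits = (size_t)shift + 2 * n;
        size_t nbytes = (nbits + 7) / 8;
        memset(buf, 0, nbytes);
        for (size_t i = 0; i < n; i++) {
            size_t bit = (size_t)shift + 2 * i;
            buf[bit / 8] |= (uint8_t)(base2bits(pat[i]) << (6 - bit % 8));
        }
        *head_mask = (uint8_t)(0xFF >> shift);
        *tail_mask = (nbits % 8) ? (uint8_t)(0xFF << (8 - nbits % 8)) : 0xFF;
        return nbytes;
    }

    /* Report every match at any 2-bit alignment. Assumes the pattern is
       long enough that nbytes >= 3 at every shift (true for 13 bases). */
    static void search_all_shifts(const uint8_t *text, size_t tlen,
                                  const char *pat) {
        uint8_t buf[64], head, tail;   /* room for patterns up to ~250 bases */
        for (int shift = 0; shift <= 6; shift += 2) {
            size_t nb = pack_shifted(pat, shift, buf, &head, &tail);
            if (tlen < nb) continue;
            const uint8_t *core = buf + 1;   /* lop off the partial first byte */
            size_t core_len = nb - 2;        /* ...and the partial last byte */
            const uint8_t *p = text + 1;
            while ((p = memmem(p, (size_t)(text + tlen - p), core, core_len))) {
                const uint8_t *start = p - 1;
                if (start + nb <= text + tlen &&
                    (start[0] & head) == (buf[0] & head) &&
                    (start[nb - 1] & tail) == (buf[nb - 1] & tail))
                    printf("match at bit offset %zu\n",
                           8 * (size_t)(start - text) + (size_t)shift);
                p++;
            }
        }
    }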

I'll note here that glibc strstr and memmem already use Knuth-Morris-Pratt to search in linear time, so test that performance. It may surprise you.
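
For the ASCII case, finding all positions with plain strstr() is just a loop that restarts one character past each hit, so overlapping occurrences are found too:

    #include <stdio.h>
    #include <string.h>

    /* Print the offset of every (possibly overlapping) occurrence of a
       non-empty needle. */
    static void find_all(const char *haystack, const char *needle) {
        for (const char *p = haystack; (p = strstr(p, needle)) != NULL; p++)
            printf("match at offset %ld\n", (long)(p - haystack));
    }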

#2


3  

If you first encode/compress the DNA string with a lossless coding method (e.g. Huffman, exponential Golomb, etc.) then you get a ranked probability table ("coding tree") for DNA tokens of various combinations of nucleotides (e.g., A, AA, CA, etc.).

What this means is that, once you compress your DNA:

  1. You'll probably be using fewer bits to store GCAGCTGAAAACA and other subsequences than the "unencoded" approach of always using two bits per base.
  2. You can walk through the coding tree or table to build an encoded search string, which will usually be shorter than the unencoded search string.
  3. You can apply the same family of exact search algorithms (e.g. Boyer-Moore) to locate this shorter, encoded search string.

As for a parallelized approach, split the encoded target string up into N chunks and run the search algorithm on each chunk, using the shortened, encoded search string. By keeping track of the bit offsets of each chunk, you should be able to generate match positions.
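
A minimal pthreads sketch of that chunking, with a naive memcmp() scan standing in for whatever matcher you actually use; the essential detail is overlapping each chunk boundary by patlen - 1 positions so matches straddling an edge aren't lost:

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>

    #define NTHREADS 4

    struct job { const char *text; size_t begin, end, tlen; const char *pat; };

    /* Scan start positions [begin, end), reading up to patlen - 1 bytes
       past end so boundary-straddling matches are still seen. */
    static void *scan_chunk(void *arg) {
        const struct job *j = arg;
        size_t patlen = strlen(j->pat);
        size_t stop = j->end + patlen - 1;
        if (stop > j->tlen) stop = j->tlen;
        for (size_t i = j->begin; i + patlen <= stop; i++)
            if (memcmp(j->text + i, j->pat, patlen) == 0)
                printf("match at offset %zu\n", i);   /* absolute offset */
        return NULL;
    }

    int main(void) {
        const char *text = "GCAGCTGAAAACAGGCAGCTGAAAACATT";  /* stand-in target */
        const char *pat  = "GCAGCTGAAAACA";
        size_t tlen = strlen(text), chunk = (tlen + NTHREADS - 1) / NTHREADS;
        pthread_t th[NTHREADS];
        struct job jobs[NTHREADS];
        for (int k = 0; k < NTHREADS; k++) {
            size_t b = (size_t)k * chunk, e = b + chunk;
            jobs[k] = (struct job){ text, b > tlen ? tlen : b,
                                    e > tlen ? tlen : e, tlen, pat };
            pthread_create(&th[k], NULL, scan_chunk, &jobs[k]);
        }
        for (int k = 0; k < NTHREADS; k++) pthread_join(th[k], NULL);
        return 0;
    }

Compile with -pthread. The begin/end bookkeeping is what keeps the reported offsets absolute rather than chunk-relative; for a 2-bit packed target the same idea applies with bit offsets.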

Overall, this compression approach would be useful if you plan on doing millions of searches on sequence data that won't change. You'd be searching fewer bits — potentially many fewer, in aggregate.

#3


2  

Boyer-Moore is a technique used to search for substrings in plain strings. The basic idea is that if your substring is, say, 10 characters long, you can look at the character at position 9 in the string to be searched. If that character is not part of your search string, you can simply start the search after that character. (If that character is, indeed, in your string, the Boyer-Moore algorithm uses a look-up table to skip the optimal number of characters forward.)

It might be possible to reuse this idea for your packed representation of the genome string. After all, there are only 256 different bytes, so you could safely pre-calculate the skip-table.
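
A sketch of that 256-entry skip table, in the Boyer-Moore-Horspool style (byte-aligned matching only; each of the four bit alignments of a packed pattern would need its own shifted pattern and table):

    #include <stddef.h>
    #include <string.h>

    /* dist[b] is how far the window may slide when the byte under its
       last position is b; bytes absent from the pattern allow a skip of m. */
    static void build_skip(const unsigned char *pat, size_t m, size_t dist[256]) {
        for (int b = 0; b < 256; b++) dist[b] = m;
        for (size_t i = 0; i + 1 < m; i++) dist[pat[i]] = m - 1 - i;
    }

    /* Horspool scan: compare at i, then skip by the table entry for the
       last byte of the current window. Returns first match offset, or -1. */
    static long bm_search(const unsigned char *text, size_t n,
                          const unsigned char *pat, size_t m) {
        size_t dist[256];
        build_skip(pat, m, dist);
        for (size_t i = 0; i + m <= n; i += dist[text[i + m - 1]])
            if (memcmp(text + i, pat, m) == 0) return (long)i;
        return -1;
    }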

#4


1  

The benefit of encoding the alphabet into bit fields is compactness: one byte holds the equivalent of four characters. This is similar to some of the optimizations Google performs when searching for words.

This suggests four parallel executions, each with the (transformed) search string offset by one character (two bits). A quick-and-dirty approach might be to just look for the first or second byte of the search string and then check extra bytes before and after matching the rest of the string, masking off the ends if necessary. The first search is handily done by the x86 instruction scasb. Subsequent byte matches can build upon the register values with cmpb.
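
One bitwise way to get those four offset comparisons in C, as an alternative to the scasb route, is to slide a 64-bit window over the packed text one byte at a time and test all four alignments with shift-and-mask. A hedged sketch, limited to patterns of at most 29 bases (pat_bits + 6 must fit in 64 bits), so for a 127-character query it would only serve as a prefix filter:

    #include <stdint.h>
    #include <stdio.h>

    static int base2bits(char c) {  /* G=00, A=01, T=10, C=11 */
        switch (c) { case 'G': return 0; case 'A': return 1;
                     case 'T': return 2; case 'C': return 3; }
        return -1;
    }

    /* Pack an ASCII pattern into the low bits of a uint64_t, first base
       most significant. Sets *bits to 2 * strlen(pat). */
    static uint64_t pack_pattern(const char *pat, int *bits) {
        uint64_t v = 0; int n = 0;
        for (; *pat; pat++, n += 2) v = (v << 2) | (uint64_t)base2bits(*pat);
        *bits = n;
        return v;
    }

    /* Slide a big-endian byte window over the packed text; at each byte,
       test the four 2-bit alignments ending 0, 2, 4 and 6 bits back. */
    static void bitslide_search(const uint8_t *text, size_t tlen,
                                uint64_t pat, int pat_bits) {
        uint64_t mask = (1ULL << pat_bits) - 1;   /* pat_bits <= 58 assumed */
        uint64_t win = 0;
        for (size_t i = 0; i < tlen; i++) {
            win = (win << 8) | text[i];
            for (int off = 0; off <= 6; off += 2) {
                if ((i + 1) * 8 < (size_t)(pat_bits + off)) continue;
                if (((win >> off) & mask) == pat)
                    printf("match at bit offset %zu\n",
                           (i + 1) * 8 - (size_t)(pat_bits + off));
            }
        }
    }

With pack_pattern("GCAGCTGAAAACA", &bits) this checks all four alignments of the 26-bit pattern using only shifts and compares per input byte.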

#5


1  

You could create a state machine. In this topic, Fast algorithm to extract thousands of simple patterns out of large amounts of text, I used [f]lex to create the state machine for me. It would require some hackery to use the 4-letter (:= two-bit) alphabet, but it can be done using the same tables as generated by [f]lex. (You could even create your own fgetc()-like function which extracts two bits at a time from the input stream and keeps the other six bits for consecutive calls. Pushback will be a bit harder, but not undoable.)
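
A sketch of that fgetc()-like two-bit reader, assuming MSB-first pairs within each byte; initialize with bits_left = 0 so the first call fetches a byte:

    #include <stdio.h>

    typedef struct { FILE *fp; int byte; int bits_left; } BitReader;

    /* Return the next 2-bit symbol (0..3), or EOF at end of input.
       The unconsumed bits of the current byte are kept between calls. */
    static int get2bits(BitReader *br) {
        if (br->bits_left == 0) {
            br->byte = fgetc(br->fp);
            if (br->byte == EOF) return EOF;
            br->bits_left = 8;
        }
        br->bits_left -= 2;
        return (br->byte >> br->bits_left) & 3;
    }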

BTW: I seriously doubt if there is any gain in compressing the data to two bits per nucleotide, but that is a different matter.

#6


1  

Okay, given your parameters, the problem isn't that hard, just not one you'd approach like a traditional string search problem. It more resembles a database table-join problem, where the tables are much larger than RAM.

  • select a good rolling hash function aka buzhash. If you have billions of strings, you're looking for a hash with 64-bit values.

  • create a hash table based on each 127-element search string. The table in memory only needs to store (hash,string-id), not the whole strings.

  • scan your very large target string, computing the rolling hash and looking up each value of the hash in your table. Whenever there's a match, write the (string-id, target-offset) pair to a stream, possibly a file (a minimal sketch of the rolling scan follows this list).

  • reread your target string and the pair stream, loading search strings as needed to compare them against the target at each offset.

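A minimal sketch of the scan step, using a Rabin-Karp-style multiplicative rolling hash as a stand-in for buzhash, over a text already expanded to one 2-bit symbol per byte; the real version would probe the (hash, string-id) table rather than compare against a single target hash:

    #include <stdint.h>
    #include <stdio.h>

    #define WINDOW 127                           /* query length, per the question */
    static const uint64_t B = 0x100000001B3ULL;  /* arbitrary odd multiplier, mod 2^64 */

    static uint64_t pow_b(uint64_t e) {          /* B^e mod 2^64 */
        uint64_t r = 1, b = B;
        for (; e; e >>= 1, b *= b) if (e & 1) r *= b;
        return r;
    }

    /* h(s) = s[0]*B^(k-1) + ... + s[k-1], mod 2^64. Rolling one symbol
       forward costs one subtract, two multiplies and one add. */
    static void scan(const uint8_t *syms, size_t n, const uint8_t *pat) {
        uint64_t b_k1 = pow_b(WINDOW - 1), target = 0, h = 0;
        for (size_t i = 0; i < WINDOW; i++) target = target * B + pat[i];
        for (size_t i = 0; i < n; i++) {
            h = (i < WINDOW) ? h * B + syms[i]
                             : (h - syms[i - WINDOW] * b_k1) * B + syms[i];
            if (i + 1 >= WINDOW && h == target)
                printf("candidate at offset %zu\n", i + 1 - WINDOW);
                /* real version: probe the hash table here and write the
                   (string-id, target-offset) pair to the stream */
        }
    }
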
I am assuming that loading all pattern strings into memory at once is prohibitive. There are ways to segment the hash table into something that is larger than RAM but not a traditional random-access hash file; if you're interested, search for "hybrid hash" and "grace hash", which are more common in the database world.

I don't know if it's worth your while, but your pair stream gives you the perfect predictive input to manage a cache of pattern strings -- Belady's classic VM page replacement algorithm.
