Apache Lucene - Full-Text Search Engine Library

Project Overview

Apache Lucene is an open-source full-text search engine library written entirely in Java. It gives developers a straightforward API for adding full-text search to their applications. Lucene was originally created by Doug Cutting and later became a top-level project of the Apache Software Foundation.

Lucene is not a complete search application but a search engine library designed to be embedded in other programs. Many well-known search products are built on it, including Apache Solr and Elasticsearch.

Key Features

  • High performance: fast text indexing and search
  • Scalable: handles large document collections
  • Multilingual: ships with analyzers for many languages
  • Flexible querying: supports complex search expressions
  • Scoring: relevance scoring and result ranking
  • Incremental updates: supports near-real-time index updates

How It Works

Core Concepts

Document

  • The basic unit of information in Lucene
  • Contains one or more Fields
  • Analogous to a record in a database

Field

  • A named attribute of a document
  • Consists of a field name and a field value
  • Can be configured as indexed, stored, analyzed, etc.

Term

  • The basic unit of the index
  • A pair of field name and term value
  • Examples: title:apache, content:search

Index

  • The data structure that holds all documents
  • Enables fast document retrieval
  • Stored on the file system or in memory
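The relationship between terms, documents, and the index can be illustrated with a deliberately simplified model (plain Java; the class and method names below are made up for this sketch and are not Lucene API — Lucene's real index uses compressed, segment-based on-disk structures):

```java
import java.util.*;

// Simplified illustration of an inverted index: each term maps to the
// sorted set of document IDs that contain it.
class ToyInvertedIndex {
    private final Map<String, TreeSet<Integer>> postings = new HashMap<>();

    // Index one document: split into lowercase terms, record the doc ID.
    void add(int docId, String text) {
        for (String term : text.toLowerCase().split("\\W+")) {
            if (!term.isEmpty()) {
                postings.computeIfAbsent(term, t -> new TreeSet<>()).add(docId);
            }
        }
    }

    // Look up the documents containing a term.
    Set<Integer> lookup(String term) {
        return postings.getOrDefault(term.toLowerCase(), new TreeSet<>());
    }
}
```

Because lookups go from term to document IDs rather than scanning every document, a query touches only the posting lists of its terms — this is what makes full-text search fast.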

Indexing Process

Raw documents → Analysis → Term creation → Inverted-index construction → Index files written
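The analysis step in this pipeline can be sketched as a tiny pipeline of its own (a simplified stand-in for what a StandardAnalyzer-style chain does: tokenize, lowercase, drop stop words; the class name and stop-word list are illustrative, not Lucene's):

```java
import java.util.*;

// Minimal stand-in for Lucene's analysis chain. Real analyzers are
// streaming TokenFilters; this batch version shows only the idea.
class ToyAnalyzer {
    private static final Set<String> STOP_WORDS =
            new HashSet<>(Arrays.asList("a", "an", "the", "is", "of"));

    static List<String> analyze(String text) {
        List<String> terms = new ArrayList<>();
        for (String token : text.split("\\W+")) {          // tokenize
            String term = token.toLowerCase();              // lowercase filter
            if (!term.isEmpty() && !STOP_WORDS.contains(term)) { // stop filter
                terms.add(term);
            }
        }
        return terms;
    }
}
```

For example, "Lucene is a Search Library" analyzes to the terms lucene, search, library — these are what get written into the inverted index.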

Search Process

Query string → Query parsing → Index lookup → Scoring and ranking → Results returned
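The lookup-and-rank steps can likewise be sketched over the toy postings model: fetch each query term's posting list, then rank documents by how many query terms they match (illustrative only — Lucene's real scoring, BM25 by default, also weighs term frequency, term rarity, and document length):

```java
import java.util.*;

// Simplified query evaluation over a term -> doc-ID postings map:
// rank documents by the number of distinct query terms they contain.
class ToyScorer {
    static List<Integer> rank(Map<String, Set<Integer>> postings,
                              List<String> queryTerms) {
        // Count how many query terms each candidate document matches
        Map<Integer, Integer> matchCount = new HashMap<>();
        for (String term : queryTerms) {
            for (int doc : postings.getOrDefault(term, Collections.emptySet())) {
                matchCount.merge(doc, 1, Integer::sum);
            }
        }
        // Higher match count first; ties broken by ascending doc ID
        List<Integer> ranked = new ArrayList<>(matchCount.keySet());
        ranked.sort((a, b) -> {
            int byCount = matchCount.get(b) - matchCount.get(a);
            return byCount != 0 ? byCount : a - b;
        });
        return ranked;
    }
}
```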

Use Cases

1. Website Search

Provide full-text search for a website, such as article or product search.

2. Enterprise Search

Build internal document search systems for email, files, knowledge bases, and more.

3. Log Analysis

Search and analyze log files to locate problems and anomalies quickly.

4. Content Recommendation

Recommend items based on content similarity, such as related articles.

5. Data Mining

Extract valuable information from large volumes of text data.

Worked Examples

Example 1: Basic Indexing and Search

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.index.*;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.*;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.BytesRef;

public class LuceneExample {
    private final Directory directory;
    private final StandardAnalyzer analyzer;

    public LuceneExample() {
        // RAMDirectory is deprecated and was removed in Lucene 9;
        // ByteBuffersDirectory is its in-memory replacement.
        directory = new ByteBuffersDirectory();
        analyzer = new StandardAnalyzer();
    }

    // Build the index
    public void createIndex() throws Exception {
        IndexWriterConfig config = new IndexWriterConfig(analyzer);
        IndexWriter writer = new IndexWriter(directory, config);

        // Add documents
        addDocument(writer, "1", "Apache Lucene search engine", "Lucene is a high-performance full-text search engine library");
        addDocument(writer, "2", "Apache Solr search platform", "Solr is an enterprise search platform built on Lucene");
        addDocument(writer, "3", "Elasticsearch distributed search", "Elasticsearch is a distributed RESTful search engine");
        addDocument(writer, "4", "The Java programming language", "Java is an object-oriented programming language");

        writer.close();
        System.out.println("Index created");
    }

    private void addDocument(IndexWriter writer, String id, String title, String content) throws Exception {
        Document doc = new Document();

        // Indexed and stored fields
        doc.add(new StringField("id", id, Field.Store.YES));
        doc.add(new TextField("title", title, Field.Store.YES));
        doc.add(new TextField("content", content, Field.Store.YES));

        // Numeric field: IntPoint for range queries, StoredField to retrieve the value
        doc.add(new IntPoint("length", content.length()));
        doc.add(new StoredField("length", content.length()));

        // DocValues field for sorting
        doc.add(new SortedDocValuesField("title_sort", new BytesRef(title)));

        writer.addDocument(doc);
    }

    // Search the index
    public void search(String queryString) throws Exception {
        IndexReader reader = DirectoryReader.open(directory);
        IndexSearcher searcher = new IndexSearcher(reader);

        // Parse the query string against the "content" field
        QueryParser parser = new QueryParser("content", analyzer);
        Query query = parser.parse(queryString);

        System.out.println("Query: " + query.toString());

        // Run the search
        TopDocs topDocs = searcher.search(query, 10);

        System.out.println("Found " + topDocs.totalHits + ":");

        for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
            Document doc = searcher.doc(scoreDoc.doc);
            float score = scoreDoc.score;

            System.out.printf("Score: %.3f, ID: %s, Title: %s%n",
                    score, doc.get("id"), doc.get("title"));
        }

        reader.close();
    }

    // Compound query example
    public void complexSearch() throws Exception {
        IndexReader reader = DirectoryReader.open(directory);
        IndexSearcher searcher = new IndexSearcher(reader);

        // Boolean query
        BooleanQuery.Builder booleanQuery = new BooleanQuery.Builder();

        // Must match. Note: TermQuery bypasses analysis, so terms must be
        // given in their indexed (lowercased) form.
        TermQuery mustQuery = new TermQuery(new Term("content", "search"));
        booleanQuery.add(mustQuery, BooleanClause.Occur.MUST);

        // Should match (optional; boosts the score when present)
        TermQuery shouldQuery = new TermQuery(new Term("title", "apache"));
        booleanQuery.add(shouldQuery, BooleanClause.Occur.SHOULD);

        // Must not match
        TermQuery mustNotQuery = new TermQuery(new Term("content", "java"));
        booleanQuery.add(mustNotQuery, BooleanClause.Occur.MUST_NOT);

        Query query = booleanQuery.build();

        // Sort by the title_sort DocValues field
        Sort sort = new Sort(new SortField("title_sort", SortField.Type.STRING));

        TopDocs topDocs = searcher.search(query, 10, sort);

        System.out.println("Compound query results:");
        for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
            Document doc = searcher.doc(scoreDoc.doc);
            System.out.println("Title: " + doc.get("title"));
        }

        reader.close();
    }

    public static void main(String[] args) throws Exception {
        LuceneExample example = new LuceneExample();

        // Build the index
        example.createIndex();

        // Simple search
        example.search("search engine");

        System.out.println("\n" + "=".repeat(50) + "\n");

        // Compound search
        example.complexSearch();
    }
}

Example 2: Custom Analyzer

import java.io.IOException;
import java.util.Arrays;

import org.apache.lucene.analysis.*;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class CustomAnalyzer extends Analyzer {
    private final CharArraySet stopWords;

    public CustomAnalyzer() {
        // Custom stop-word list
        stopWords = new CharArraySet(Arrays.asList(
                "a", "an", "the", "is", "are", "of", "and", "or",
                "to", "in", "on", "for", "with", "by", "at", "very"
        ), true);
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        // Standard tokenizer
        StandardTokenizer tokenizer = new StandardTokenizer();

        // Filter chain
        TokenStream tokenStream = tokenizer;

        // Lowercase
        tokenStream = new LowerCaseFilter(tokenStream);

        // Stop-word removal
        tokenStream = new StopFilter(tokenStream, stopWords);

        // Custom filter
        tokenStream = new CustomFilter(tokenStream);

        return new TokenStreamComponents(tokenizer, tokenStream);
    }

    // Custom token filter
    private static class CustomFilter extends TokenFilter {
        private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

        protected CustomFilter(TokenStream input) {
            super(input);
        }

        @Override
        public boolean incrementToken() throws IOException {
            if (input.incrementToken()) {
                String term = termAtt.toString();

                // Custom processing, e.g. synonym replacement
                if ("search".equals(term)) {
                    termAtt.setEmpty().append("retrieval");
                }

                return true;
            }
            return false;
        }
    }

    // Exercise the analyzer
    public static void testAnalyzer() throws Exception {
        CustomAnalyzer analyzer = new CustomAnalyzer();
        String text = "Apache Lucene is a high-performance full-text search engine library";

        TokenStream tokenStream = analyzer.tokenStream("content", text);
        CharTermAttribute termAtt = tokenStream.addAttribute(CharTermAttribute.class);

        tokenStream.reset();
        System.out.println("Analysis output:");
        while (tokenStream.incrementToken()) {
            System.out.println(termAtt.toString());
        }
        tokenStream.close();
        analyzer.close();
    }
}

Example 3: Highlighting Search Results

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.*;
import org.apache.lucene.search.highlight.*;
import org.apache.lucene.search.highlight.Formatter;
import org.apache.lucene.store.Directory;

public class SearchHighlighter {
    private final Directory directory;
    private final Analyzer analyzer;

    public SearchHighlighter(Directory directory, Analyzer analyzer) {
        this.directory = directory;
        this.analyzer = analyzer;
    }

    public void searchWithHighlight(String queryString) throws Exception {
        IndexReader reader = DirectoryReader.open(directory);
        IndexSearcher searcher = new IndexSearcher(reader);

        QueryParser parser = new QueryParser("content", analyzer);
        Query query = parser.parse(queryString);

        TopDocs topDocs = searcher.search(query, 10);

        // Build the highlighter: wrap matched terms in <b>...</b>
        Formatter formatter = new SimpleHTMLFormatter("<b>", "</b>");
        QueryScorer scorer = new QueryScorer(query);
        Highlighter highlighter = new Highlighter(formatter, scorer);
        Fragmenter fragmenter = new SimpleSpanFragmenter(scorer, 100);
        highlighter.setTextFragmenter(fragmenter);

        System.out.println("Highlighted results:");

        for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
            Document doc = searcher.doc(scoreDoc.doc);
            String content = doc.get("content");

            // Extract up to three highlighted fragments
            String[] fragments = highlighter.getBestFragments(analyzer, "content", content, 3);

            System.out.println("Title: " + doc.get("title"));
            for (String fragment : fragments) {
                System.out.println("Snippet: " + fragment);
            }
            System.out.println();
        }

        reader.close();
    }
}

Example 4: Near-Real-Time Search (NRT)

import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.index.*;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.*;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class NearRealTimeSearch {
    private IndexWriter writer;
    private SearcherManager searcherManager;
    private ControlledRealTimeReopenThread<IndexSearcher> reopenThread;

    public void initialize() throws Exception {
        Directory directory = FSDirectory.open(Paths.get("index"));
        StandardAnalyzer analyzer = new StandardAnalyzer();

        IndexWriterConfig config = new IndexWriterConfig(analyzer);
        writer = new IndexWriter(directory, config);

        // SearcherManager hands out near-real-time searchers over the writer
        searcherManager = new SearcherManager(writer, new SearcherFactory());

        // Background thread that reopens searchers between 0.1 s and 1.0 s after changes
        reopenThread = new ControlledRealTimeReopenThread<>(writer, searcherManager, 1.0, 0.1);
        reopenThread.setName("NRT Reopen Thread");
        reopenThread.setDaemon(true);
        reopenThread.start();
    }

    public void addDocument(String id, String title, String content) throws Exception {
        Document doc = new Document();
        doc.add(new StringField("id", id, Field.Store.YES));
        doc.add(new TextField("title", title, Field.Store.YES));
        doc.add(new TextField("content", content, Field.Store.YES));

        // updateDocument replaces any existing document with the same id.
        // (The original called addDocument first as well, which would have
        // indexed the document twice.)
        long generation = writer.updateDocument(new Term("id", id), doc);

        // Block until a searcher reflecting this change is available
        reopenThread.waitForGeneration(generation);
    }

    public void search(String queryString) throws Exception {
        IndexSearcher searcher = searcherManager.acquire();
        try {
            QueryParser parser = new QueryParser("content", new StandardAnalyzer());
            Query query = parser.parse(queryString);

            TopDocs topDocs = searcher.search(query, 10);

            System.out.println("NRT search results (" + topDocs.totalHits + "):");
            for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
                Document doc = searcher.doc(scoreDoc.doc);
                System.out.println("ID: " + doc.get("id") + ", Title: " + doc.get("title"));
            }
        } finally {
            searcherManager.release(searcher);
        }
    }

    public void close() throws Exception {
        reopenThread.interrupt();
        reopenThread.close();
        searcherManager.close();
        writer.close();
    }
}

Performance Tuning Tips

1. Indexing Optimization

// Bulk-indexing settings
IndexWriterConfig config = new IndexWriterConfig(analyzer);
config.setRAMBufferSizeMB(256);   // size of the in-memory indexing buffer
config.setMaxBufferedDocs(1000);  // max buffered documents before a flush

// Merge policy
TieredMergePolicy mergePolicy = new TieredMergePolicy();
mergePolicy.setMaxMergeAtOnce(10);
mergePolicy.setSegmentsPerTier(10);
config.setMergePolicy(mergePolicy);

2. Search Optimization

// Query cache (up to 100 cached queries, capped at ~10 MB).
// Note: the interface is QueryCache; LRUQueryCache is its implementation.
QueryCache queryCache = new LRUQueryCache(100, 10_000_000);
searcher.setQueryCache(queryCache);

// Load only the stored fields you need
StoredFieldVisitor fieldSelector = new StoredFieldVisitor() {
    @Override
    public Status needsField(FieldInfo fieldInfo) {
        return "id".equals(fieldInfo.name) || "title".equals(fieldInfo.name)
                ? Status.YES : Status.NO;
    }
};

3. Memory Management

// Example JVM tuning flags
-Xms4g -Xmx4g
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200
-XX:+UseStringDeduplication

As the core library behind many search systems, Apache Lucene's powerful full-text retrieval capabilities and flexible architecture make it an excellent foundation for building search applications. With sensible configuration and tuning, Lucene delivers fast, accurate search.
