《電子技術(shù)應(yīng)用》 (Application of Electronic Technique)
Cybersecurity and Data Governance (网络安全与数据治理)
CLC classification number: TP391.1; Document code: A; DOI: 10.19358/j.issn.2097-1788.2024.12.008
Citation format: Liu Wei, Li Bo, Yang Siyao. Text data entity recognition based on multi-head convolutional residual connections[J]. Cybersecurity and Data Governance, 2024, 43(12): 54-59.
Text data entity recognition based on multi-head convolution residual connections
Liu Wei, Li Bo, Yang Siyao
School of Information Science and Engineering, Shenyang University of Technology
Abstract: To construct a relational database for text data in work reports, and to address both the problem of extracting useful information entities from unstructured text and the feature loss of traditional networks during information extraction, a deep learning-based entity recognition model named RoBERTa-MCR-BiGRU-CRF is proposed. The model first uses the pre-trained Robustly Optimized BERT Pretraining Approach (RoBERTa) model as an encoder, feeding the trained word embeddings into a Multi-head Convolutional Residual (MCR) network layer to enrich semantic information. Next, the embeddings are fed into a Bidirectional Gated Recurrent Unit (BiGRU) layer to further capture contextual features. Finally, a Conditional Random Field (CRF) layer decodes the sequence and predicts labels. Experimental results show that the model achieves an F1 score of 96.64% on the work report dataset, outperforming the other models compared. Moreover, on the data name entity category, its F1 score is 3.18% and 2.87% higher than those of BERT-BiLSTM-CRF and RoBERTa-BiGRU-CRF, respectively. The results demonstrate that the model effectively extracts useful information from unstructured text.
Key words: deep learning; named entity recognition; neural networks; data mining
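The stack summarized in the abstract (RoBERTa encoder → multi-head convolutional residual layer → BiGRU → CRF) can be sketched roughly as follows. This is a minimal PyTorch-style sketch of the layers after the encoder only: the hidden size, kernel sizes, number of convolution heads, and tag count are all assumptions, and the CRF layer is omitted; the paper's actual hyperparameters are not given here.

```python
import torch
import torch.nn as nn

class MCRBiGRUHead(nn.Module):
    """Sketch of the post-encoder layers: multi-head 1-D convolutions with a
    residual connection (MCR), a BiGRU, and a linear layer producing per-tag
    emission scores that a CRF layer would decode."""
    def __init__(self, hidden=768, kernel_sizes=(1, 3, 5), num_tags=9):
        super().__init__()
        # one convolution "head" per kernel size; odd kernels with padding
        # k // 2 keep the sequence length unchanged
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, hidden, k, padding=k // 2) for k in kernel_sizes
        )
        self.proj = nn.Linear(hidden * len(kernel_sizes), hidden)
        self.bigru = nn.GRU(hidden, hidden // 2, batch_first=True,
                            bidirectional=True)
        self.emit = nn.Linear(hidden, num_tags)

    def forward(self, x):                  # x: (batch, seq, hidden) from RoBERTa
        c = x.transpose(1, 2)              # Conv1d expects (batch, hidden, seq)
        multi = torch.cat([conv(c) for conv in self.convs], dim=1)
        h = self.proj(multi.transpose(1, 2)) + x   # residual connection
        h, _ = self.bigru(h)               # contextual features
        return self.emit(h)                # (batch, seq, num_tags) emissions
```

A quick shape check: feeding a `(batch, seq, hidden)` tensor through the module yields `(batch, seq, num_tags)` emission scores, one score vector per token.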

Introduction

Entity recognition plays an important role in information extraction. Current data-extraction work relies mainly on deep learning techniques applied to named entity recognition (NER) to extract nouns and related concepts. NER can pull out useful data and discard irrelevant information, making it easier to build databases and to process and trace data afterwards, which improves data security; it can be applied in areas such as knowledge-graph question-answering systems and data provenance systems. At its core, entity recognition is a sequence labeling problem: assigning a class label to each element of a text or number sequence.
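The sequence-labeling formulation can be illustrated with the common BIO tagging scheme: each token receives a B- (begin), I- (inside), or O (outside) tag, and contiguous B-/I- runs are merged back into entity spans. The tag names and example tokens below are illustrative, not from the paper's dataset:

```python
def decode_bio(tokens, tags):
    """Merge BIO-tagged tokens into (entity_text, entity_type) spans."""
    entities, current, etype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):          # a new entity begins
            if current:
                entities.append(("".join(current), etype))
            current, etype = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == etype:
            current.append(tok)           # continue the current entity
        else:                             # "O" or an inconsistent tag
            if current:
                entities.append(("".join(current), etype))
            current, etype = [], None
    if current:                           # flush the final entity, if any
        entities.append(("".join(current), etype))
    return entities

tokens = ["生", "产", "总", "值", "增", "长", "5", "%"]
tags   = ["B-DATA", "I-DATA", "I-DATA", "I-DATA", "O", "O", "B-NUM", "I-NUM"]
print(decode_bio(tokens, tags))  # [('生产总值', 'DATA'), ('5%', 'NUM')]
```

The model's job is to predict the tag sequence; turning tags back into entities is then a deterministic merge like the one above.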

With the development of deep learning, entity recognition has made remarkable progress: traditional rule- and dictionary-based methods have gradually been replaced by methods based on statistical learning and neural networks. Since 2018, BERT-based pre-trained neural network models (such as BERT-BiLSTM-CRF) have achieved the best contemporary performance on several public datasets. This paper proposes a new method that incorporates external knowledge resources to improve NER model performance. The model is evaluated on a self-built dataset, verifying the proposed method's recognition performance on unstructured text data and demonstrating its effectiveness for the NER task.
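The CRF decoding stage shared by these models (the "-CRF" suffix in BERT-BiLSTM-CRF and the proposed RoBERTa-MCR-BiGRU-CRF) is typically implemented with the Viterbi algorithm: given per-token emission scores from the upstream network and a learned tag-transition matrix, it finds the globally highest-scoring tag sequence instead of picking each token's tag independently. A toy pure-Python sketch, with made-up scores rather than values from the paper:

```python
def viterbi_decode(emissions, transitions):
    """emissions: [T][K] per-token tag scores; transitions: [K][K] scores
    (previous tag -> next tag). Returns the best tag index sequence."""
    T, K = len(emissions), len(emissions[0])
    score = list(emissions[0])            # best path score ending in each tag
    back = []                             # backpointers, one list per step
    for t in range(1, T):
        new_score, ptr = [], []
        for j in range(K):
            # choose the best previous tag for current tag j
            best_i = max(range(K), key=lambda i: score[i] + transitions[i][j])
            new_score.append(score[best_i] + transitions[best_i][j]
                             + emissions[t][j])
            ptr.append(best_i)
        score = new_score
        back.append(ptr)
    # trace the best final tag back to the start
    best = max(range(K), key=lambda j: score[j])
    path = [best]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# two tags (0 = O, 1 = B); with zero transition scores, Viterbi just takes
# each token's best emission
emissions = [[2.0, 1.0], [1.0, 2.0], [2.0, 1.0]]
transitions = [[0.0, 0.0], [0.0, 0.0]]
print(viterbi_decode(emissions, transitions))  # [0, 1, 0]
```

With a transition matrix that penalizes tag switches (e.g. `[[1, -2], [-2, 1]]` for the same emissions), the decoder instead keeps the whole sequence at tag 0: this global consistency is what the CRF layer adds over per-token classification.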


For the full text of this article, download:

http://m.ihrv.cn/resource/share/2000006267


Author information:

Liu Wei, Li Bo, Yang Siyao

(School of Information Science and Engineering, Shenyang University of Technology, Shenyang 110158, Liaoning, China)



This content is original to the AET website and may not be reproduced without authorization.