Approach: use the natural-language understanding and parsing ability of an OpenAI GPT model to extract the information in a resume into a standard YAML format (a custom template), with LangChain as the toolkit.
Language: Python. The final output is two file formats: a txt file (YAML-formatted content) and an Excel file.
Steps:
Install the required libraries: pip install langchain langchain-openai langchain-community pypdf pyyaml pandas openpyxl (pypdf is needed by PyPDFLoader, and openpyxl by DataFrame.to_excel)
Load the libraries:
```python
from langchain_community.document_loaders import PyPDFLoader
from langchain_openai import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain import LLMChain, PromptTemplate
import os  # needed for the directory walk below
import yaml
import pandas as pd
```
OpenAI configuration. A resume is itself a fairly well-structured document, so gpt-3.5 is sufficient:
```python
OPENAI_API_KEY = "sk-"  # replace with your own API key
model = 'gpt-3.5-turbo-0125'
```
The YAML template, which contains a single instruction plus all of the content categories a resume covers:
```python
template = """Format the provided resume to this YAML template (don't add any yaml mark):
---
name: ''
gender: ''
age: ''
phoneNumbers:
  - ''
emails:
  - ''
websites:
  - ''
dateOfBirth: ''
address: ''
JobOrientations:
  - ''
personal evaluation: ''
education:
  - school: ''
    degree: ''
    fieldOfStudy: ''
    startDate_e: ''
    endDate_e: ''
workExperience:
  - company: ''
    position: ''
    startDate_w: ''
    endDate_w: ''
    description: ''
skills:
  - skill: ''
certifications:
  - certification: ''
{human_input}"""
```
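To make the expected model reply concrete, here is a hypothetical filled-in instance (the placeholder data is invented, not from the source). Because the prompt forbids YAML code-fence marks, the raw reply should parse directly with yaml.safe_load:

```python
import yaml

# Hypothetical model output matching the template (invented placeholder data)
sample_reply = """---
name: 'Jane Doe'
gender: 'female'
age: '29'
phoneNumbers:
  - '000-0000'
emails:
  - 'jane@example.com'
education:
  - school: 'Example University'
    degree: 'BSc'
    fieldOfStudy: 'CS'
    startDate_e: '2013'
    endDate_e: '2017'
skills:
  - skill: 'Python'
"""

# The reply round-trips cleanly into a Python dict
data = yaml.safe_load(sample_reply)
print(data['name'])                    # Jane Doe
print(data['education'][0]['school'])  # Example University
```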
Configure the prompt template and the LLMChain; temperature can be set on the low side:
```python
prompt = PromptTemplate(
    input_variables=["human_input"],
    template=template
)
# Clear the memory if multi-turn conversation is not needed
memory = ConversationBufferMemory(memory_key="")
llm_chain = LLMChain(
    llm=ChatOpenAI(model=model, openai_api_key=OPENAI_API_KEY, temperature=0.5),
    prompt=prompt,
    verbose=True,
    memory=memory,
)
```
PDF content extraction, using LangChain's PyPDFLoader tool:
```python
def extract_text_from_pdf(pdf_path):
    loader = PyPDFLoader(pdf_path)
    pages = loader.load()
    return pages
```
Extract every PDF file under the directory and write out the structured result:
```python
directory = '/xxx'  # replace with your directory path
for filename in os.listdir(directory):
    if filename.endswith('.pdf'):
        pdf_path = os.path.join(directory, filename)
        txt_path = os.path.join(directory, os.path.splitext(filename)[0] + '_s.yaml')
        xls_path = os.path.join(directory, os.path.splitext(filename)[0] + '.xlsx')
        # Extract the data from the PDF and have the LLM structure it
        text = extract_text_from_pdf(pdf_path)
        text_structured = llm_chain.predict(human_input=text)
        # Write the structured data to a TXT file
        with open(txt_path, 'w') as f:
            f.write(text_structured)
        # Convert the YAML data into a Python dict
        data_dict = yaml.safe_load(text_structured)
        # If an item is not a list, wrap it in a list
        for key, value in data_dict.items():
            if not isinstance(value, list):
                data_dict[key] = [value]
        # print(data_dict)
        # Turn each list in the dict into a DataFrame,
        # then concatenate all the DataFrames column-wise
        dfs = []
        for key, value in data_dict.items():
            if all(isinstance(i, dict) for i in value):
                df = pd.DataFrame(value)
            else:
                df = pd.DataFrame({key: value})
            dfs.append(df)
        df = pd.concat(dfs, axis=1)
        # Write the DataFrame to an Excel file
        df.to_excel(xls_path, index=False)
```
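The dict-to-DataFrame flattening in the loop above is the least obvious step, so here is a minimal stand-alone illustration with invented data: scalar fields are wrapped into one-element lists, a list of dicts becomes a multi-row DataFrame, and everything is concatenated column-wise, with shorter columns padded with NaN:

```python
import pandas as pd

# Invented miniature of a parsed resume dict
data_dict = {
    'name': 'Jane Doe',
    'skills': [{'skill': 'Python'}, {'skill': 'SQL'}],
}

# Wrap non-list values into one-element lists
for key, value in data_dict.items():
    if not isinstance(value, list):
        data_dict[key] = [value]

# One DataFrame per key, concatenated column-wise
dfs = []
for key, value in data_dict.items():
    if all(isinstance(i, dict) for i in value):
        dfs.append(pd.DataFrame(value))
    else:
        dfs.append(pd.DataFrame({key: value}))
df = pd.concat(dfs, axis=1)

print(list(df.columns))  # ['name', 'skill']
print(df.shape)          # (2, 2) -- 'name' is NaN in the second row
```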
Running the code, an example of the extracted YAML output (content anonymized):
A more rigorous approach would be to use function calling, or LangChain's own structured-output parser tools, but when configuring the template each field then has to carry its own description, which is rather verbose, so I won't go into it here.