1 min read · from Machine Learning

What kinds of models are people training with document data? [P]

We've helped some folks generate synthetic data for a number of projects, some of them involving "document data": annotated PDFs and PNGs, tax forms, health forms, and especially documents with PII that are hard to source because of obvious privacy concerns. So we built an engine that constructs a simulation and then extracts the training data from that simulation.

We're trying to make sure our pipeline slots into a normal training pipeline, so I'm curious about your workflows. Today we output in formats consistent with FUNSD, BIO tagging, YOLO (v5 and higher), Donut, COCO, etc. Are we targeting the right formats, or are people training for something different that would need another format or ontology?
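For concreteness, here is a minimal sketch of how a single labeled bounding box might be emitted in two of those formats, a YOLO-style text line and a FUNSD-style JSON entry. The helper names and the exact field set are illustrative assumptions, not our actual pipeline code:

```python
import json

def to_yolo_line(cls_id, box, img_w, img_h):
    """Convert an absolute-pixel box (x_min, y_min, x_max, y_max) into a
    YOLO-format line: "class x_center y_center width height", normalized
    to [0, 1] by the image dimensions."""
    x_min, y_min, x_max, y_max = box
    xc = (x_min + x_max) / 2 / img_w
    yc = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{cls_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

def to_funsd_entry(entry_id, text, label, box, words):
    """Build a FUNSD-style form entry: id, text, semantic label
    (e.g. "question"/"answer"/"header"), box, and per-word boxes."""
    return {
        "id": entry_id,
        "text": text,
        "label": label,
        "box": list(box),
        "words": [{"text": t, "box": list(b)} for t, b in words],
        "linking": [],  # FUNSD also records question->answer links
    }

# Example: a date field on a 1000x1400 px page (class 2 is hypothetical).
box = (120, 80, 320, 110)
print(to_yolo_line(2, box, 1000, 1400))
entry = to_funsd_entry(0, "01/02/2024", "answer", box, [("01/02/2024", box)])
print(json.dumps(entry))
```

The point being that the same underlying annotation (text, label, box, word-level boxes) can fan out to most of these targets, which is why the ontology question above matters more to us than the serialization.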

Other things we're trying to figure out: is a PyPI SDK package useful, or do people just use the API and not care? Or is it "shut up and give me a zip file"? :-)

submitted by /u/bgeisel1


Tagged with

#synthetic data
#document data
#training pipeline
#PII