
HOW WE WORK

OUR APPROACH

Engagement Model

Platform licensing / Subscription model

This model usually involves data resolution and integration from public, enterprise, and third-party sources, as well as dashboard set-up and customization. The total cost is split into a ‘platform development/set-up fee’ (depending on the scope and customization required; payable before set-up or in installments until platform delivery) and a ‘licensing fee’ (depending on IT infrastructure, number of users, and license duration), payable at the start of the licensing phase.


Professional services / Risk sharing model

This model applies to the various virtual drug discovery and development modules/projects, such as indication prioritization for an asset or clinical trial protocol design optimization and success prediction for a trial. The total cost is split into a ‘project kick-off fee’ (payable upon project start) and ‘milestone payments’ payable over the course of the project upon completion of project milestones and delivery of services/results.

Engagement Process

Collaborative Approach

The Innoplexus team works together with its customers to ensure optimal integration with customer processes. Before any new project starts, a scoping workshop is held with the customer to outline their major requirements and define the exact project deliverables. Next, a proposal document containing the project background/context, approach, deliverables, project governance, timeline, and cost is submitted for customer review and approval. Upon acceptance of the proposal, a Master Service Agreement (MSA) and/or a Statement of Work (SoW) is signed between the parties. As soon as Innoplexus receives the Purchase Order (PO), project kick-off and delivery are planned and executed as agreed.

Iterative Way of Working

Innoplexus teams follow an agile methodology. As a first step, a minimum viable product is delivered; it is then built up incrementally, incorporating advanced requirements and customer feedback at regular intervals as part of milestone/project delivery meetings, thereby continuously improving the desired solution/end product.

Frequent Touchpoints

Innoplexus and customer teams connect regularly to ensure alignment on goals and to monitor project progress. This is achieved through bi-weekly meetings for regular projects (4-6 weeks duration) and, in addition, monthly steering committee meetings for bigger, more complex projects (26-30 weeks duration).

Quality Assurance

Data Ingestion & Collation

Fully automated with AI algorithms and verified monthly

Data ingestion and collation is a critical process: it ensures that the data being analyzed is accurate, complete, and consistent. This enables teams to make informed decisions based on reliable data, helps identify errors or inconsistencies that need to be addressed, and cleans out junk data.

We follow industry best practices for data ingestion and collation, including defining clear data standards, establishing data quality metrics, and implementing data validation checks through our in-house UDEGO process. We use automated tools and processes to streamline data integration and validation, and we have processes in place to highlight data issues proactively, executed under the guidance of our Chief Medical Officer (CMO). A robust data governance framework underpins all of this. We maintain 95% confidence in our data, which is verified on a monthly basis as we crawl more than 50,000 sources.
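As an illustration only, the sketch below shows the kind of automated batch-level validation checks such a pipeline can run; the field names, thresholds, and checks are hypothetical, and the actual UDEGO process is not public:

```python
from collections import Counter

REQUIRED_FIELDS = {"source", "title", "published_at"}  # hypothetical schema

def validate_batch(records: list[dict]) -> dict:
    """Compute simple data-quality metrics for one crawled batch."""
    complete = 0
    seen_ids = Counter()
    for rec in records:
        seen_ids[rec.get("id")] += 1
        if REQUIRED_FIELDS <= rec.keys() and all(rec[f] for f in REQUIRED_FIELDS):
            complete += 1
    duplicates = sum(n - 1 for n in seen_ids.values() if n > 1)
    completeness = complete / len(records) if records else 0.0
    return {
        "records": len(records),
        "completeness": completeness,
        "duplicates": duplicates,
        # Flag the batch if it falls below a 95% completeness target.
        "passes_threshold": completeness >= 0.95 and duplicates == 0,
    }

print(validate_batch([
    {"id": 1, "source": "pubmed", "title": "A", "published_at": "2024-01-01"},
    {"id": 1, "source": "pubmed", "title": "A", "published_at": "2024-01-01"},
]))
```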

Hypotheses Generation

Largely automated with AI algorithms and sanity tested by experts

Hypothesis generation is an important step in the scientific method and is commonly used in data analysis and research. In the context of quality assurance, it involves formulating hypotheses about potential problems or issues that may be affecting the quality of a product or process, based on previous experience, data analysis, or other sources of information.

The process of hypothesis generation typically involves the following steps:
  1. Identifying the problem or issue: This involves clearly defining the problem or issue that needs to be addressed.
  2. Gathering data: Data is gathered from various sources, including internal records, customer feedback, and market research.
  3. Analyzing the data: The data is analyzed to identify patterns, trends, and potential causes of the problem or issue.
  4. Formulating a hypothesis: Based on the analysis of the data, one or more hypotheses are developed to explain the problem or issue.
  5. Testing the hypothesis: The hypothesis is tested through further data analysis or experimentation to determine if it is supported by the evidence.
  6. Refining the hypothesis: If the hypothesis is not supported by the evidence, it may need to be refined or revised based on new information.
By following this process, quality assurance teams can develop hypotheses that help to identify and address potential quality issues, leading to improved product quality and customer satisfaction.
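To make step 5 concrete, here is a small, self-contained sketch of testing a hypothesis against data: a two-sided z-test comparing error rates before and after a hypothetical process change. The counts are made up for illustration:

```python
from math import sqrt, erf

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for a difference in proportions (step 5: testing the hypothesis)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion under H0
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))      # standard error of the difference
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF via erf
    return z, p_value

# Hypothetical example: error rate before (40/1000) vs. after (18/1000) a pipeline change.
z, p = two_proportion_ztest(x1=40, n1=1000, x2=18, n2=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 would support the hypothesis of a real change
```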

Intuitive Interfaces

Industry-standard functional testing as per the SDLC process

For you, intuitive interfaces are critical to ensuring that the software or product being developed is easy to use and meets the needs of end users. The Quality Assurance team is responsible for testing the user interface and ensuring that it meets the organization’s standards for usability and accessibility. Our automated testing process covers 90% of our products, helping us avoid errors proactively.

To provide you with high-quality data and products, the QA team always performs manual testing to assure that functionality works properly, API testing to validate APIs and their data, application security testing to maintain data security and privacy, and performance testing to ensure the application scales at peak usage. We are always open to your feedback for usability and acceptance testing. By focusing on intuitive interfaces, the QA team helps ensure that the software or product being developed is user-friendly, accessible, scalable, secure, and meets your needs. Our vision is to increase user satisfaction, improve user adoption, and ultimately increase business success.
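As an example of the API-testing layer, a minimal contract test might look like the sketch below; the endpoint, parameters, and response fields are hypothetical placeholders, not a real Innoplexus API:

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical endpoint for illustration

def test_document_search_contract() -> None:
    """API test: validate status code, response shape, and basic data quality."""
    resp = requests.get(f"{BASE_URL}/v1/documents",
                        params={"query": "oncology"}, timeout=10)
    assert resp.status_code == 200
    payload = resp.json()
    assert isinstance(payload.get("results"), list)
    for doc in payload["results"]:
        assert doc.get("id"), "every document needs a stable identifier"
        assert doc.get("title"), "titles must not be empty"
```

A test like this can run under pytest as part of the automated suite, so every build re-validates the API contract before release.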

Final Outcome / Deliverable

Largely manually validated by medical experts under the guidance of CMO/CSO

In the context of a process that is largely manually validated by medical experts under the guidance of a Chief Medical Officer (CMO) or Chief Scientific Officer (CSO), the goal is to ensure the accuracy, completeness, and consistency of the validated data. The QA team is responsible for ensuring that the validated data meets the organization’s standards for quality and is free from errors or inconsistencies. This involves reviewing the data and confirming that it meets the predefined criteria for accuracy, completeness, and consistency. Some of the key considerations for the QA team when validating data in this context include:
  1. Accuracy: The QA team should ensure that the data is accurate and free from errors, by reviewing the data for inconsistencies or errors.
  2. Completeness: The QA team should ensure that the data is complete, by verifying that all relevant data has been collected and included in the final output.
  3. Consistency: The QA team should ensure that the data is consistent, by reviewing the data for any discrepancies or inconsistencies.
  4. Documentation: The QA team should maintain detailed documentation of the validation process, including any issues or concerns that were identified and how they were addressed.
  5. Compliance: The QA team should ensure that the validation process is compliant with all relevant regulations and standards.
By focusing on these key areas, the QA team can help to ensure that the validated data is of high quality and can be used with confidence by the CMO or CSO in their decision-making. This helps improve patient outcomes and contributes to the overall success of the organization.
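A brief sketch of how such a review trail can be kept documented and auditable (items 4 and 5); the record fields, checks, and identifiers are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ValidationFinding:
    record_id: str
    check: str        # "accuracy" | "completeness" | "consistency" | "compliance"
    detail: str
    reviewer: str     # the medical expert who flagged the issue
    found_at: datetime

findings: list[ValidationFinding] = []

def review(record: dict, reviewer: str) -> None:
    """Log findings so the validation trail stays documented and auditable."""
    if not record.get("indication"):
        findings.append(ValidationFinding(record["id"], "completeness",
                                          "missing indication", reviewer, datetime.now()))
    if record.get("phase") not in {"I", "II", "III", "IV"}:
        findings.append(ValidationFinding(record["id"], "consistency",
                                          f"unexpected phase {record.get('phase')!r}",
                                          reviewer, datetime.now()))

review({"id": "NCT00000001", "phase": "2"}, reviewer="dr_a")  # illustrative record
for f in findings:
    print(f.record_id, f.check, f.detail)
```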

Our Technology

Machine Learning & AI

Tapping the wealth of unstructured data from internal and external sources. Understanding domain-specific contextual information (e.g., our life sciences language processing™ engine). Building reasoning systems to serve the intent of user queries.

Ontology – Life-Science-Language-Processing

Mapping all discoverable concepts from content of all major data sources in the base ontology. Connecting observations from curated sources and literature. Self-learning unseen concepts validated by random checks.

Computer Vision

Leverages image processing to classify and extract relevant info from PDF and image files. Enhanced OCR to handle ambiguous and special characters with higher precision. Understands page layout and structure in the same way as humans do.
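For illustration, a minimal layout-aware extraction step might look like the following sketch using the open-source Tesseract engine via pytesseract; the enhanced OCR described above is proprietary, so this only gestures at the idea:

```python
from PIL import Image
import pytesseract  # open-source OCR stand-in; not the proprietary enhanced OCR

def extract_text_blocks(path: str) -> list[dict]:
    """Extract words with their page coordinates, so layout can be reconstructed."""
    data = pytesseract.image_to_data(Image.open(path),
                                     output_type=pytesseract.Output.DICT)
    blocks = []
    for i, word in enumerate(data["text"]):
        if word.strip() and float(data["conf"][i]) > 60:  # drop low-confidence tokens
            blocks.append({"text": word,
                           "box": (data["left"][i], data["top"][i],
                                   data["width"][i], data["height"][i])})
    return blocks
```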

Blockchain

Smart contract system for searching unpublished data and making transactions. Real-time valuation of unpublished data through AI. TruAgent™ enables secure integration of confidential data.

Network Analysis

Modeling and persisting (storing) the entire data set as a network in a graph database. Multigraph with networks from different asset classes as layers. Large scale network analysis to find key insights in real time.
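As a small illustration of the multigraph idea, the sketch below layers edges by asset class and runs a simple centrality analysis on one layer; the entities and the use of networkx are illustrative assumptions, not the production stack:

```python
import networkx as nx

# A multigraph where each edge carries its "layer" (asset class), as described above.
G = nx.MultiGraph()
G.add_edge("EGFR", "gefitinib", layer="drug-target")   # hypothetical entities
G.add_edge("EGFR", "NSCLC", layer="gene-disease")
G.add_edge("gefitinib", "NSCLC", layer="drug-indication")

# Insight example: rank nodes by degree centrality within a single layer.
drug_target = nx.Graph((u, v) for u, v, d in G.edges(data=True)
                       if d["layer"] == "drug-target")
print(nx.degree_centrality(drug_target))
```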

Entity Normalization

Resolving entities from disparate sources covering name variations and degeneration. Increasing the precision to discover entities even with sparse metadata. Leveraging crawled data to improve normalization.
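A toy sketch of the idea: resolving surface-form variations to a canonical entity with a similarity threshold. The dictionary, threshold, and use of difflib are illustrative, not the production approach:

```python
from difflib import SequenceMatcher

# Toy canonical dictionary mapping an entity to its known name variations.
CANONICAL = {"acetylsalicylic acid": ["aspirin", "ASA", "acetylsalicylate"]}

def normalize(mention: str, threshold: float = 0.85) -> str | None:
    """Resolve a raw mention to a canonical entity, tolerating name variations."""
    mention_l = mention.lower().strip()
    best, best_score = None, 0.0
    for canonical, variants in CANONICAL.items():
        for candidate in [canonical, *variants]:
            score = SequenceMatcher(None, mention_l, candidate.lower()).ratio()
            if score > best_score:
                best, best_score = canonical, score
    return best if best_score >= threshold else None

print(normalize("Asprin"))  # a common misspelling still resolves to the canonical form
```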

Security

We follow the highest standards of security on a global scale.

ISO 27001 certified

GDPR compliant
Information security norms validated by more than 5 large pharma companies and several CROs and biotechs
Proxy server access maintained via public/private key infrastructure
Database access only via proxy servers (even for applications)
Private IPs never disclosed to the outside world
Virtual private clouds for restricted instances (no peer access)
Two-stage authentication
No remote database access (all ports blocked)
Data in the database is encrypted; decryption occurs only in runtime applications
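To illustrate the last point, encryption at rest with decryption only at runtime, here is a minimal sketch using the cryptography library’s Fernet recipe; the key-handling details (environment injection, KMS) are assumptions, not a description of the actual setup:

```python
from cryptography.fernet import Fernet

# The key lives only in the runtime application (e.g., injected via environment
# or a key management service), never alongside the database, so data at rest
# stays opaque even to someone with direct storage access.
key = Fernet.generate_key()
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"patient_id=12345")  # what gets stored in the database
print(ciphertext)                                 # unreadable at rest
print(fernet.decrypt(ciphertext))                 # plaintext exists only inside the app
```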