Transcript
Thank you all for joining today. The subject of today's webcast is going to be exploring the AMP guidelines with VSClinical and what's become available with our most recent software release. We’re going to focus on the value of having a tool that allows for quick, consistent, comprehensive, somatic variant interpretation. Also, pair that with standardized clinical reporting. Before jumping into the project, I just wanted to give new attendees to this webcast some background on our company.
First and foremost, we recently received grant funding from the NIH, which we're incredibly grateful for. The research reported in this publication was supported by the National Institute of General Medical Sciences of the National Institutes of Health under the listed awards. Additionally, we're also grateful for receiving some local grant funding from the state of Montana. Our PI here is Andreas Scherer, Ph.D., CEO of Golden Helix, and the content described today is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. So again, we're thankful for grants like these. They provide huge momentum in developing the quality software that we want to give our customers. So, let's learn a little bit more about Golden Helix and who we are as a company.
Golden Helix is a global bioinformatics software and analytics company that is working to enable researchers and clinical practices to analyze large genomic data sets. We were originally founded in 1998 based on pharmacogenomic work performed at GlaxoSmithKline, which is actually still a key investor in our company today.
We currently have two flagship products - VarSeq and SNP & Variation Suite (or SVS for short). VarSeq really serves as a clinical tertiary analysis tool available for variant annotation and filtration processing, but, additionally, users have access to automated AMP guidelines or ACMG guidelines as well. VarSeq also has the capability to detect copy number variations, scaling from single-exon events up to large aneuploidy events. Additionally, the finalization of that variant interpretation and classification is further optimized with the clinical reporting capability that's in the software, and users can essentially integrate all of these features into a standardized workflow, which can be automated further by running batch scripts with VSPipeline, for example. And paired with VarSeq is VSWarehouse, which serves as a repository for this large amount of useful genomic data. VSWarehouse not only solves the issue of basic data storage for all of this increasing genomic content but is also fully queryable - you can go in and audit these variants. Basically, it's a great repository where you can say, "I've had a variant that might have had a stale classification previously, but I want to be updated whenever there's new evidence that comes down the road…" Access is fully definable for any kind of user, whether that's project managers or collaborators who are getting in just to look at the results. And then lastly, SVS serves as a research platform for population statistics, for example. It really handles a lot of complex analyses for genomic and phenotypic data, including workflows like GWAS, genomic prediction, and RNA-Seq analysis, as well as CNVs.
Over the years our software has become very well received by the industry. We've actually been cited in 1,000s of peer-reviewed publications, which is just a testament to our customer base. This includes reputable journals like Science, Nature, and Nature Genetics, for example. Another testament to the customer base is the customers themselves: our software has been implemented in over 400 organizations around the world, including top-tier institutions, government organizations, clinics, and genetic testing labs… We're now well over 20,000 installs with 1,000s of unique users.
The question that we're framing here is, "Why is all of this relevant to you?" Over the course of those 20 years, we've actually gotten a lot of user feedback, which we want to immediately incorporate into developing and releasing newer versions of the software that stay relevant to those customers' needs. So that, in addition to those research grants that we have, really allows us to factor in all that user feedback and stay aware of the industry needs so that we can keep our software up to date. Additionally, we always want to stay relevant in the community by regularly attending conferences as well as providing a lot of good product information through eBooks, tutorials, blog posts and, of course, like today, webcasts as well. Now, your access to the software is based on a simple subscription model where you will not get charged per sample, nor will you get charged per version. With that license subscription model you also have full access to support. So, I myself or the other FASs on staff are always available to help answer any of your questions, whether it's settling a question over email or hopping on a web call to do some more hands-on training. We're always available for that.
In the Golden Helix stack, here is basically a general view of everything we provide. You have the capability to essentially start with a FASTQ file and get all the way down to a clinical report. This is achievable through our partnership with Sentieon, who provides the solutions for the alignment and variant calling procedures that produce those VCF and BAM files. Now, those files serve as the basis for import into the VarSeq software, both for CNV detection and for the tertiary analysis in VarSeq as well. So, if you are performing NGS-based CNV analysis, Golden Helix is actually a market leader here, supported by studies like the Robarts Research Institute's showing 100% concordance with MLPA methods, for example. Additionally, the imported variants that go into your VarSeq project can be run through VSClinical's ACMG & AMP guidelines. And then, after completing all of that secondary and tertiary processing, the analysis can be rendered into a clinical report within seconds. All that content can then be stored in VSWarehouse, providing researchers and clinicians with access to this information and the ability to review previous findings.
Now, of course, the focus today is the AMP guidelines, which are embedded in the VarSeq software. VarSeq is a powerful, flexible, and scalable variant annotation, filtration, and interpretation engine. This commercial-grade software is designed as a local desktop application that's installed on your computer. And the nice thing about it is you're not simply looking at a list of variants in a spreadsheet format. VarSeq supplies a lot of good, rich visualization capabilities with tools like GenomeBrowse embedded in the software. So, it's always really nice to go in and visually see what's going on in the neighborhood of the variant that you're looking at. But one of the more powerful capabilities the software gives us is that once you design a workflow, you can set up a filter chain to prioritize the clinically relevant variants that you want to filter down to. You load up all the annotations that are desirable to stack against as evidence for those variants, plus various other algorithms that we have in the software, including the ACMG & AMP guidelines criteria collection. Basically, all of that can be set up in a standardized workflow so that the next time you go in, all you have to do is import data. You don't have to worry about building that project from scratch every time you open up the software.
And in regard to project templates, we actually ship some with the software, so you could always use these as a starting guide, whether you're looking at trio analysis, gene panels, or those that are more specific to cancer guidelines, like cancer gene panels or tumor-normal analysis, for example. These templates are, you know, fully customizable - you can modify them however you'd like to… like I said, they just serve as a starting point to get you familiar with how that workflow could look if you want to customize it to your own standards. Beyond that, we also have example projects that utilize these templates but also show a list of variants that go through that workflow so you can investigate what that procedure looks like.
Building up this workflow, you want to leverage as much of the evidence for the variant as possible, and that's why we wanted to supply a long list of annotations that you can incorporate into your project. With VarSeq being a filtration and annotation engine, we seek to provide the best quality and most comprehensive list of databases for your analysis. A lot of our public annotations are hosted on our public server, and you can go through, get access to these quality curated databases, and incorporate them into your workflow. Additionally, depending on the license package that you get, there are premium annotations that you also have access to, which could be the framework of getting the clinical stack + VSReports, or even the full VSClinical stack, which we're going to explore today with that interpretation hub. These annotations are versioned for each updated release, and we'll notify you if there's a new version available. We don't integrate those updates automatically - we want to make users aware of them so they can make the selection to integrate them themselves.
And if there are any public databases that you would like to see in the future, always reach out to us - we're very receptive to these feature requests. Additionally, we have a convert wizard tool, so if you have any kind of private data or other annotation tracks that you'd like to see in your workflow that aren't inherently there or available, you can always use that convert wizard to get them into the software yourself. Now, regarding the AMP guidelines: in the somatic processing of these variants, which is going to be the focus for today, VSClinical automates the integration of a large list of these databases, which we will break down into more detail here shortly.
But first, what is the ultimate goal in utilizing the AMP guidelines? It's not only to create a full understanding of a variant or biomarker impact but also to have the ability to investigate and report on a variety of biomarker types. This could include single nucleotide variants, insertions/deletions, copy number variants, gene fusions, and considerations for wild-type genes as well. And the fundamental goal here is to account for these different biomarker types to not only store their classifications and interpretations, but also supply treatment options for the patients, much like the image seen on the right, where you're looking at the impact of vemurafenib in a patient that has the BRAF V600E mutation. So, all of that drug sensitivity, resistance, prognostic, and diagnostic information will ultimately determine the biomarker classification tier, directly following the standard AMP guidelines.
And the clinical significance, or tier level, of a biomarker is determined by indications for treatment as well as prognostic and diagnostic outcomes. This includes the sensitivity or resistance to a particular drug or treatment, for example. Now, to reach Tier 1, a variant with strong clinical significance would require meeting the standards for level A or level B evidence, i.e., known FDA-approved therapies or well-powered studies with expert consensus. Tier 2 variants of potential significance require level C or D evidence, being FDA-approved treatments for a different tumor type, investigational therapies, multiple published studies with consensus, or preclinical trials with case reports. Tier 3 variants of unknown clinical significance will have little or no presence in general frequency tracks or in cancer-specific databases, and no publications with cancer associations. And then lastly, Tier 4 is anything that would be benign or likely benign due to high allele frequency in population databases, with no published evidence for association with cancer either. The nice thing is this graphic does a really great job of simplifying the understanding of how to reach each of these tiers. However, the reality is that capturing all of the relevant evidence to determine the tier is quite a large undertaking, hence the need for not only automating the classification process but also automating the presentation of all this relevant content in a final clinical report. Now let's discuss how we navigate through this tertiary process to go from a list of variants to that final clinical report.
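The tier logic described above can be sketched in code. This is a hypothetical illustration of the decision flow only - the function and field names are made up for the example, and VSClinical's actual implementation weighs far more evidence than this.

```python
def amp_tier(evidence_levels, in_cancer_databases, population_af):
    """Illustrative mapping of gathered evidence to an AMP tier.

    evidence_levels: set of evidence levels found, e.g. {"A", "C"}
    in_cancer_databases: any presence in somatic catalogs or cancer publications
    population_af: highest population allele frequency (0.0-1.0)
    """
    # Tier IV: common in the population with no published cancer association
    if population_af > 0.05 and not in_cancer_databases:
        return "Tier IV (benign / likely benign)"
    # Tier I: level A or B evidence (FDA-approved therapy, expert consensus)
    if evidence_levels & {"A", "B"}:
        return "Tier I (strong clinical significance)"
    # Tier II: level C or D evidence (therapies approved for a different tumor
    # type, investigational therapies, preclinical data with case reports)
    if evidence_levels & {"C", "D"}:
        return "Tier II (potential clinical significance)"
    # Tier III: rare variant with no established cancer association
    return "Tier III (unknown clinical significance)"

print(amp_tier({"A"}, True, 0.0001))  # Tier I (strong clinical significance)
print(amp_tier(set(), False, 0.12))   # Tier IV (benign / likely benign)
```

The ordering matters: the common-and-unreported check runs first so a high-frequency variant is never promoted by stray evidence entries.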
From a workflow perspective, variants selected for interpretation in VSClinical will initially pass through a user-defined filter chain. This could include eliminating low-quality variants or common or known benign variants, for example. You can essentially build up that filter chain to set up any criteria that you'd like and then standardize it for your workflow. The filtered variants can then be pulled into VSClinical for processing under the somatic and germline guidelines - the AMP and ACMG guidelines, respectively. Essentially, users will be guided through all the available evidence for the variant or biomarker in a streamlined fashion that really locks in consistency for the interpretation process, and after reaching that final classification and interpretation, you can very easily render the clinical report.
And what we're going to be focusing on today for the demo is stages three and four, where we're not going to worry so much about the workflow that's been pre-designed but rather the interpretation and reporting processes. Now, this interpretation deep dive takes place in VSClinical, and it's worth discussing the value points of having a true guideline interface. First off, you really want to maintain consistency in the results. This is relevant for a single user who may be suffering from potential workflow fatigue, or when comparing multiple users' interpretations. More subtle is the added value of getting new users familiar with the guidelines more quickly. The interpretation hub serves as a fantastic educational interface that accounts for all the relevant guideline criteria, gets new users who maybe aren't well-versed in the guidelines up to speed very quickly, and essentially gives you more capability of fewer users doing more analysis. Last but critical is the support of integrating these guidelines into the software ourselves, which, as the software provider, Golden Helix wants to do for you. Our goal is to implement these guidelines into the software so that you get to spend more time processing variants and less time worrying about tweaking some bioinformatic pipeline. And to shorten the path to final variant interpretation and classification, VSClinical presents a simple layout of all the available criteria from a long list of these databases. Let's discuss a bit more of the sources that are relevant to the AMP guidelines.
The AMP guidelines process uses several public and proprietary databases that contain information about variant frequencies, known somatic mutations, functional predictions, and treatment and clinical trial information. This is essentially a categorized list that's meant to allow you to quickly break down all of these different databases relevant to each of their applications. We also support clinical drug prediction annotations such as DrugBank, PMKB, and clinical trial information as well, and all of that can provide insight into FDA-approved drugs for a given gene or biomarker.
The nice thing, though, is what's not currently seen on this list but is another really powerful tool that's going to help you come down to a final interpretation and subsequently a clinical report: utilizing our hosted CancerKB catalog. The Golden Helix CancerKB catalog is accessible to any Golden Helix user with purchase of the AMP guidelines. This catalog is a carefully reviewed data set containing assessments of biomarkers and genes in the context of specific cancers, including information on available treatments. It is built by an expert panel of curators and professionals in the clinical context who aggregate and write up interpretations for the submitted biomarkers and genes, such as this interpretation here for BRAF V600E in melanoma. Additionally, users of the AMP feature can essentially choose to integrate their interpretations into this database anonymously, where they'll be reviewed by our curators and updated on a regular basis to serve as an ever-growing, evolving cancer resource.
So the major application value here is that the interpretations in the CancerKB catalog can be a starting point for a lab to finalize an interpretation and streamline the progress to the final report.
And then the final stage of this tertiary process is to generate a comprehensive clinical report on the biomarker. The relevant criteria and classification are automatically pulled into the final report, which reinforces standardization and consistency, for example by reducing the copy-and-paste element of producing a clinical report. Moreover, the CancerKB database will only speed up this process by leveraging the expert knowledge of previously interpreted genes and biomarkers. This AMP guideline package has received a major upgrade, with reports providing easily customizable Word and PDF formats. More detailed customizations are always optional with some JavaScript and HTML experience, allowing for the inclusion of any additional sample fields or VarSeq project fields that are available, and we'd be more than happy to assist you with that as well.
So now, switching gears for today's presentation, we're going to be exploring the third and fourth stages of our workflow, which are focused on the interpretation and reporting stages for a few different biomarkers through our AMP guidelines; many of the workflow steps for filtering and coverage statistics were pre-run for demonstration purposes. I've actually brought all of these different biomarkers into a single sample, and this includes a BRAF V600E, an ERBB2 amplification, and a BCR-ABL1 gene fusion, as well as an additional germline variant set up as a secondary finding. First, we're going to cover the input of patient-level information, review the coverage statistics, review also the variants that we selected, deep dive into the interpretation process, and then get down to that final stage of producing a clinical report. So, let's go ahead and open up our VarSeq project.
I wanted to take a moment for anybody who is maybe seeing VarSeq for the first time, to orient you to what we're looking at with the VarSeq project. Essentially, past the point of importing, where you have the full summary information for all the samples and variants that you've imported into the project, you're going to see the list of those variants reported in the variant table here on a per-row basis. Now, in addition to all the VCF-level, annotation, and algorithm information that's available in VarSeq, you're essentially using any one of these individual fields or headers as criteria for setting up the filter chain. So, as a very brief example here, I've got variants that I'm looking at that are ideally high quality: passed the variant caller, genotype quality, read depth, so on and so forth. Then, essentially, you use this filter chain to narrow down to a set of clinically interesting variants that you would pull into the VSClinical hub here for that interpretation deep dive, and this is where things are really going to get started for us. So, I'm going to go ahead and take my AMP guideline window and merge it here with my variant table, and I'll go ahead and shrink my filter chain.
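The kind of quality filter chain described here - caller PASS flag, then genotype quality, then read depth - can be sketched like this. The field names and thresholds are illustrative assumptions for the example, not VarSeq's actual schema.

```python
# Two toy variant records; in VarSeq these values would come from the
# imported VCF fields shown in the variant table.
variants = [
    {"id": "chr7:140453136 A>T", "filter": "PASS", "gq": 99, "dp": 3812},
    {"id": "chr13:32900000 G>A", "filter": "LowQual", "gq": 12, "dp": 8},
]

def passes_filter_chain(v, min_gq=30, min_dp=50):
    """Each condition mirrors one card in the filter chain: the caller's
    PASS flag, a genotype-quality floor, and a read-depth floor."""
    return v["filter"] == "PASS" and v["gq"] >= min_gq and v["dp"] >= min_dp

clinically_interesting = [v for v in variants if passes_filter_chain(v)]
print([v["id"] for v in clinically_interesting])  # only the PASS, high-quality variant survives
```

Because each condition is a separate, named step, the same chain can be saved and re-applied to every new sample, which is the standardization point being made above.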
Additionally, I want to zoom in here just to make it a little bit easier for everybody to read. The first tab in the AMP guideline interface is essentially for setting up all the details that we have for the sample and the patient, and we go in and pre-fill all this information. This will actually carry over into our clinical report automatically for us when we get to the final tab. So, if we scroll down here, I've already pre-filled a lot of this information for the sample and patient. But what I also wanted to really focus on is how you're going to be directing the rest of the interpretation process - the collection of all of this information focused not only on the tissue type that's specific to that sample, but also on the tumor type that you're investigating. You can see here I'm guiding this project down a path where I want to focus on melanoma.
Beyond that, I can look at basic variant statistics in this NGS sequencing summary - not only what the allele frequencies are for the variants but also what types of variants we're looking at. And beyond that, what's always relevant in these gene panel or somatic gene panel reports is what we're looking at in terms of coverage. In this case, we've got fairly good coverage here, with a mean depth of over 3,800x, and if we scroll down we can see that, even if we wanted to change these criteria for desired averages or required minimums, everything here checks out for the regions that we're looking at, with ample coverage.
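The coverage check being described - per-region mean depth measured against a desired average and a required minimum - amounts to something like the sketch below. The region names, depths, and thresholds are illustrative, not the actual panel or VSClinical defaults.

```python
# Hypothetical per-region mean depths for a small somatic panel.
regions = {
    "BRAF exon 15": 3845,
    "ERBB2 exon 20": 4010,
}

def coverage_ok(mean_depths, desired_average=500, required_minimum=100):
    """Pass only if the overall average meets the desired target AND no
    individual region falls below the required minimum depth."""
    overall = sum(mean_depths.values()) / len(mean_depths)
    low_regions = [name for name, d in mean_depths.items() if d < required_minimum]
    return overall >= desired_average and not low_regions

print(coverage_ok(regions))  # True: everything checks out with ample coverage
```

Flagging individual low regions separately from the overall average matters, because a deep panel can still hide a single poorly covered exon.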
Next, beyond the patient tab, we want to go into the mutation profile and investigate, or basically pull up, the list of the variants that we want to build up a classification and interpretation for. In this first section here for small variants, I've got my BRAF V600E with its position and the chromosome that it's in, and I'm going to label it as somatic. One indication for that is its low allele frequency of 6.18%. This will follow the trajectory of getting an oncogenicity score, and we're going to process it as a biomarker for that final clinical report. Likewise, I also have a RAF1 variant here, which is a really good example of a straightforward, suspected-pathogenic germline variant that we want to label as a secondary germline finding.
Additionally, I've brought a CNV manually into the project. There are a couple of different options for how you can bring your CNVs into the AMP guideline interface. One is manually, and I'll just show you an example of that here with this BRCA2 example. You can see this manual submission for CNVs is actually very helpful in capturing what the CNV is that you're looking for: not only which gene we're in and what the clinically relevant transcript would be, but also the total number of exons, which ones we're focusing on for the event, whether we're looking at a deletion or a duplication, and the metrics that go in to reinforce that call. Likewise, VarSeq has all the capability of doing this NGS CNV detection, so you could always add those CNVs directly from your project as well. Beyond that, I've also included a gene fusion, with ABL1 fused with BCR, and we can investigate that too as an additional biomarker.
So, the next stage here is, you know, we're going to eventually go in and assess these biomarkers to get ready for that clinical report. But we also want to investigate, for these two example variants here, what the pathogenicity scoring and oncogenicity scoring would be. If I click over here to the variant tab, before we deep dive into our somatic BRAF V600E variant, I actually wanted to switch gears and just quickly go through our secondary germline finding, so that we can keep our focus on somatic beyond that point. So, we're investigating this suspected-germline RAF1 variant here. The reason I wanted to show this is that I know some of you on the call might be familiar with the ACMG guidelines that we had previously released in an earlier version of the software. The thing is, if you ever want to investigate germline variants as well as somatic ones, you're not losing that capability in the AMP guideline interface - you can process those germline criteria too, so you can do everything from this AMP guideline interface. Essentially, what we're doing is processing this missense variant in RAF1 to see what kind of impact it has and what the final classification would be. And we're already directed toward calling it pathogenic, following the direct ACMG rule logic that gets us to this classification, due to all of the criteria that we've collected here. Just to quickly go through each of these criteria: PM2 essentially assesses whether this is a common variant or whether it's novel in population frequencies. PM1: how many other variants do we have nearby that are potentially pathogenic, and is there any consideration for those variants being benign? PP2 supporting: are we in a region that has a low rate of benign variation, and is this gene known to be sensitive to missense variants as a common mechanism of disease?
PP3 essentially says that SIFT and PolyPhen-2, GERP and PhyloP collectively predict this variant to be both damaging and conserved across different species. And then, more importantly, to get to this final stage of pathogenic, you really need submissions showing this variant has been reported, in a track like ClinVar for example, which has allowed us to add the PS1 and PM5 criteria. The same variant that we're looking at has been previously reported - we've gone through and done all that literature review - and a different variant at the same amino acid residue, still labeled pathogenic, allows us to bring in that PM5 moderate-level evidence. So, all of this stacks up together to get us to the final rule logic being met for pathogenic.
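For anyone newer to the guidelines, here is a minimal sketch of how criteria like the ones just collected (PS1, PM1, PM2, PM5, PP2, PP3) combine into a classification. Only the combining rules relevant to this example are shown; the full ACMG guidelines define many more combinations and also benign-evidence codes.

```python
def acmg_classification(criteria):
    """Count strong (PS*), moderate (PM*), and supporting (PP*) pathogenic
    criteria and apply a subset of the ACMG combining rules."""
    strong = sum(1 for c in criteria if c.startswith("PS"))
    moderate = sum(1 for c in criteria if c.startswith("PM"))
    supporting = sum(1 for c in criteria if c.startswith("PP"))
    # Pathogenic: 1 Strong plus (>=3 Moderate), or (2 Moderate + >=2
    # Supporting), or (1 Moderate + >=4 Supporting)
    if strong >= 1 and (moderate >= 3
                        or (moderate == 2 and supporting >= 2)
                        or (moderate == 1 and supporting >= 4)):
        return "Pathogenic"
    # Likely Pathogenic: 1 Strong plus 1-2 Moderate
    if strong >= 1 and moderate >= 1:
        return "Likely Pathogenic"
    return "Uncertain Significance"

# The RAF1 example: one strong plus three moderate criteria meet the rule.
print(acmg_classification(["PS1", "PM1", "PM2", "PM5", "PP2", "PP3"]))  # Pathogenic
```

This is exactly why collecting PS1 matters so much in the walkthrough: without a strong criterion, the same moderate and supporting evidence would not reach pathogenic under these rules.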
If we scroll down here, essentially what we're looking at is the final interpretation that we've built up for this germline variant, including all of the factors that we've just talked about, automatically pulled into the interpretation, which you can store into an assessment catalog for future reference. The context there is: if you or somebody else finds this specific variant again, and you're using that shared catalog, this interpretation would automatically populate this section, and you could always go in and review it, add any new evidence that's available, and just keep referencing the pre-existing knowledge base or adding more to it. The nice thing about this is we not only show you all the criteria available to get to that final classification, but you can always go in and deep dive into each of these sections. In this case, gnomAD and 1000 Genomes in terms of population frequency say that this is a novel variant, which makes it very straightforward to grab PM2. For what impact this variant has on the gene, if we want to look at other missense variants in RAF1 in this exon that are pathogenic, we can very quickly see what's nearby that is likely pathogenic or pathogenic with multiple rating status as well. That allows us to say we're in a mutational hotspot, and there are no known benigns in that region.
Beyond that, like I mentioned with the z-score: a high z-score is indicative of a low rate of benign variation, as well as a gene that is sensitive to missense variants, because we have 38 other pathogenic missense variants in this gene labeling it as a common mechanism of disease. And if we get down to the computational evidence, SIFT and PolyPhen predict it as damaging, and PhyloP and GERP predict it as conserved. We're not affecting any splice site here, though you could very easily assess that if you were. And then, if we're considering the supporting evidence of PP3, multiple lines of computational evidence say that this has a deleterious effect on the gene, so we can say yes to that. Beyond that is, of course, what I was mentioning with the PS1 and PM5 criteria: does a previously established pathogenic variant exist? In this case, yes, and we verified that with all of our literature review to bring in that PS1, as well as, interestingly, the PM5 for a different change at that residue as well.
We've got all of that accounted for here for our germline variant, and we're basically going to leverage our final pathogenic classification and include all of this information in a full, comprehensive interpretation in our clinical report.
Let's go ahead and switch gears now to our somatic variant in BRAF. It's very similar in orientation, but different because we're going through the AMP guidelines on a somatic basis. What we're landing on now is a final oncogenicity score - you can see here it's pretty substantial, with a value of 10 - which is really a summation of each of these criteria added up collectively, looking against somatic catalogs. It's a similar context but a little bit different, and we can always go and look at those differences if we deep dive here below. The nice thing about this is that for any of the interpretations you're building up for these variants as biomarkers, you can see here where we're storing that interpretation specific to the tissue type that we're referencing, melanoma, and what the scope is for the actual biomarker itself. Any of the oncogenicity evidence that we're reviewing, we can basically add to our interpretation.
For example, if I take this and copy this text here, I can go in and review and save that interpretation after reviewing all of these criteria, which we'll get to here shortly. But I just wanted to show you this really quickly: if I click on review and save, this is where you have access to submit those interpretations to that CancerKB track. If you leave this checked, you'll submit this to us, we'll go through and review it, and we'll keep growing this evolving CancerKB catalog so that everyone has access to the most comprehensive picture of both the biomarkers and the genes themselves.
I'm going to go ahead and discard these changes and we'll keep this as it was. But if I scroll down and we review all of these criteria, we're asking: is it occurring at a high frequency in cancer or somatic catalogs? In this case, yes - this BRAF V600E variant is in over 28,000 samples, and that frequency is well beyond the threshold; 35%, I believe, in ICGC. If we look at these different tissue types, for example, we can see that it's pretty prevalent in skin. So, this allows us to choose the somatic catalogs criterion, plus-three scoring, adding up toward that final value of 10. Beyond that, germline population catalogs: basically, how common is this? Pretty rare in this case, with a South Asian population frequency of 0.003%, and novel in 1000 Genomes. This allows us to not apply any subtractive criteria that would lower our oncogenicity score - we're essentially saying don't impact it; it's essentially rare or novel.
And then if we go into the relevant clinical assessments, we see that we have submissions of this variant pretty routinely with things like ClinVar and really just to kind of wrap this up quickly, we can see that it's been previously classified as pathogenic not only in ClinVar, but also in CiVIC as well.
What impact do we have on the gene? Just like what we were looking at before with that RAF1 variant, we're looking at other missense variants in the same exon that might be pathogenic, and we definitely have a missense-sensitive area for sure.
And that allows us to say yes, we have some nearby pathogenic missense variants in this region - 16 variants within 6 amino acids, actually. And then, are we affecting a hotspot, or rather an active binding site? In this case, both are true, so that allows us to say hotspot region +1 as well as active region +1 additionally. And the computational evidence again: damaging, conserved, whether we're affecting a splice site (in this case we're not), and then adding that in-silico predictions +1 - all agree on a deleterious effect, but without that splice site. So that's a quick review of all the criteria that essentially get us up to that final oncogenicity score. So now we get to deep dive into the biomarker section.
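The additive scoring just walked through can be sketched as a simple sum of weighted criteria. The criterion names and weights below loosely follow the narration (somatic catalogs +3, hotspot +1, active site +1, in-silico +1, with subtractive population criteria) but are illustrative assumptions - VSClinical's real scoring model has more criteria and different internals, which is why this toy example does not land on exactly 10.

```python
def oncogenicity_score(evidence):
    """Toy additive oncogenicity score over boolean evidence flags."""
    score = 0
    if evidence.get("high_freq_somatic_catalogs"):  # e.g. >28,000 catalog samples
        score += 3
    if evidence.get("reported_pathogenic"):         # ClinVar / CIViC submissions
        score += 2
    if evidence.get("nearby_pathogenic_missense"):  # pathogenic neighbors in exon
        score += 1
    if evidence.get("mutational_hotspot"):          # hotspot region
        score += 1
    if evidence.get("active_binding_site"):         # active/binding site
        score += 1
    if evidence.get("in_silico_deleterious"):       # damaging + conserved predictions
        score += 1
    if evidence.get("common_in_population"):        # subtractive criterion
        score -= 3
    return score

braf_v600e = {
    "high_freq_somatic_catalogs": True,
    "reported_pathogenic": True,
    "nearby_pathogenic_missense": True,
    "mutational_hotspot": True,
    "active_binding_site": True,
    "in_silico_deleterious": True,
}
print(oncogenicity_score(braf_v600e))  # a high positive score, strongly oncogenic
```

The key idea from the walkthrough survives even in this toy form: positive evidence accumulates, and population-frequency evidence can only subtract, so a rare, catalog-heavy hotspot variant like BRAF V600E scores highly.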
You'll see that we have these biomarkers that we need to review, and the first one we're going to review is this BRAF V600E variant. If I scroll down here, the first section we're going to review is really just the gene itself: any hallmarks of the gene, the different transcripts we might want to account for, and whether BRAF has been reported in a fusion with any other gene. But really, what you want to get down to is the gene summary capturing everything we know about the gene. The nice thing about this is, if we go to the BRAF function and descriptions, we can pull in a comprehensive understanding of the gene from all the different databases we have available here. Beyond that are the submissions stored in CancerKB, whether you're looking at this in terms of all cancers, anything specific to melanoma, or other cancer types like non-small cell lung cancer, for example. The nice thing is, if you match this to the specific tissue type you're looking at, it'll automatically pull in the interpretation for that gene if it already exists in CancerKB.
So if we click on all cancers, there's a different interpretation here for BRAF in the more general context of all cancers, as opposed to this specific one for melanoma. We can leverage this, add more information to it, and then submit it to us, and we'll go in and get that curated as well. Beyond the gene interpretation in general, you also want to look at considerations for that gene like alteration frequency and outcomes. The nice thing is you can look at the gene in terms of the tissue type you're examining with databases like MSK: how commonly is it seen, what's the frequency there, and what are the related interpretations for different tumor types? In this case, we're sticking with melanoma and what's brought in from that CancerKB submission.
And then, beyond the gene itself, we go into the actual biomarker summary, where we capture what we know about BRAF V600E in terms of melanoma, and we have that final oncogenicity score. The other nice thing is the catalog sections that show how often you're seeing this in skin for ICGC and MSK-IMPACT, so you can get an understanding of the distribution of that variant across these different somatic tracks. Additionally, you can see not only the V600E submission and its known information for melanoma, but, if we scroll over, what impact other variants have too. What impact does V600K have? What impact does V600R have? All in melanoma, things you might also want to consider and bring into that interpretation, as well as the impact V600E might have on different tumor types.
So all of these interpretations have essentially been pre-captured, and I'm going to leverage them for the report section. But before we get to that, I wanted to show you the sections for drug sensitivity, drug resistance, and prognostic and diagnostic interpretations. This is really what we're after: not only what we can look at for treatment options for melanoma, but matching our specific mutation and evidence type, drug sensitivity in this case. What is the list of FDA-approved sources we can use for these drugs? We pull in the list of all the drugs available for treatment and sensitivity, with all the information and details we need to build that interpretation.
So you can see here, there's quite a list to go through for BRAF V600E. Beyond that, you can also leverage the information coming in from CancerKB for these treatment options, and then factor in things like drug resistance along with all that prognostic and diagnostic data. And then, with a full understanding of the impact of these treatment options and what's available to us, we can select the tier level and say, yes, we're at Tier I Level A: FDA-approved therapies included in professional guidelines for this tumor type. So it's a good, streamlined example of a Tier I Level A classified biomarker.
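The tier assignment just described follows the AMP/ASCO/CAP evidence hierarchy: the strongest clinical evidence available for the biomarker in that tumor type determines the tier and level. Here is a hedged sketch of that mapping; the evidence-category names are hypothetical labels, and the function is an illustration of the published guideline structure, not VSClinical's implementation.

```python
# Illustrative mapping from strongest supporting evidence to an AMP tier/level.
# Evidence labels are hypothetical; the tier structure follows the 2017
# AMP/ASCO/CAP somatic variant interpretation guidelines.

def amp_tier(strongest_evidence):
    """Return the (tier, level) implied by the strongest clinical evidence."""
    if strongest_evidence == "fda_approved_same_tumor_type":
        return ("Tier I", "Level A")     # FDA-approved therapy / professional guidelines
    if strongest_evidence == "well_powered_studies_consensus":
        return ("Tier I", "Level B")     # well-powered studies with expert consensus
    if strongest_evidence == "fda_approved_other_tumor_type":
        return ("Tier II", "Level C")    # approved for a different tumor type
    if strongest_evidence == "preclinical_or_case_reports":
        return ("Tier II", "Level D")    # preclinical data or small case series
    return ("Tier III", None)            # variant of unknown clinical significance

# BRAF V600E in melanoma has FDA-approved therapies for that exact tumor type:
print(amp_tier("fda_approved_same_tumor_type"))  # ('Tier I', 'Level A')
```

The practical upshot is that once the matched FDA-approved therapies are in view for the patient's tumor type, the Tier I Level A call falls out directly.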
Now, if we switch gears to look at these other biomarkers, I took a little more general search across the information here, because maybe things weren't as directly related to melanoma, but you basically get the same context for interpreting the ERBB2 amplification: the gene itself, everything we know about it from the function descriptions across these different databases, and what we'd want to save in terms of all cancers, or specific cancer types if they're relevant to the sample or patient you're looking at. And like before, I've got all these interpretations pre-loaded, so I can go down to the drug sensitivity section and look across all these different tissue types for anything matching this mutation, this specific CNV amplification, for those FDA-approved drugs. Last but not least, of course, is looking at the gene fusion in this context as well: what do we know about FDA-approved therapies for a fusion between these two genes, BCR and ABL1? So essentially, we go through and capture the final interpretations for each of these different kinds of biomarkers, assessing the oncogenicity score as well as the pathogenicity score for the germline variants. Where we really want to land now is the final stage of producing a clinical report.
If I click on this here and there were any pending review changes or anything else you had modified, you'd be prompted to go back, review, submit, and save before you render the report. Otherwise, if everything looks good, we can check this out. We're basically saying that we have FDA-approved treatments for this patient's biomarkers that match the tumor type, and we also have some secondary germline mutations as well. So we go in here, sign out to finalize this report, and confirm that sign-out.
And then we can go into our Word template-based rendering. If I click on this cancer report template V1, here's your starting point: you open up this location, which takes you to where your report template lives in a Windows directory, or whatever directory holds that file. So let me open that up and drag it over. Here is our report template, and you can very easily go in and customize it to meet all the standards you need for your desired report format. Once this is in place and everything's good to go, it will include not only all the patient and sample-level information and result summaries, but the list of all the somatic and germline variants you've collected, all the interpretations and treatment options available, and all the coverage statistics you want to report on as well. Once this is good to go, you say render for the sample.
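The rendering step above is field substitution: the template holds named fields, and each sample's data is merged into them. VarSeq uses Word documents with its own field syntax, so as a stand-in, here is a minimal sketch of the same idea using Python's standard-library `string.Template`; the field names (`patient_id`, `tumor_type`, and so on) are hypothetical placeholders, not VarSeq's actual field names.

```python
from string import Template

# Minimal field-substitution sketch. VarSeq renders into Word templates with
# its own field syntax; this text-based version just shows the merge concept.
report_template = Template(
    "Patient: $patient_id\n"
    "Tumor type: $tumor_type\n"
    "Biomarker: $biomarker ($tier, $level)\n"
)

# Hypothetical per-sample values merged into the template at render time.
rendered = report_template.substitute(
    patient_id="SAMPLE-001",
    tumor_type="Melanoma",
    biomarker="BRAF V600E",
    tier="Tier I",
    level="Level A",
)
print(rendered)
```

Customizing the report then amounts to editing the template (moving fields, adding a lab logo) without touching the per-sample data feeding into it, which is exactly why the Word-based approach stays flexible.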
We'll go ahead and open up where our sample's report ended up, and this is actually convenient: it lands in the project folder you created for your VarSeq project itself. So I'll go ahead and open this up, click yes here, and drag over my report.
Here is that report format we were looking at: all the patient and sample-level information, the list of all those somatic variants as well as that secondary germline variant, our full interpretation for BRAF V600E with all the treatment options, prognostic and diagnostic data, drug sensitivity (or drug resistance, rather), any of the other interpretations we pulled in for the other somatic variants, as well as the coverage statistics. This is meant to be a simple example of how to go through this streamlined interpretation process and get down to the clinical report. Unfortunately, there's just not enough time in the day for me to go through the whole process of building a project from scratch and work through all of this. But if you have any interest in setting up a one-on-one session, maybe handling your own variants through this interpretation process to get down to the report, we'd be more than happy to schedule a training call to go through that together.
Let's go ahead and switch back to the slides. I think Delaina wants to fill us in on some updated content, and then we can give you a moment to add your questions if you have any. Yeah, thank you, Darby. As we mentioned earlier, we'll be opening up for some Q&A, but just to give you a quick moment to enter those into the questions pane, I think this would be a good time to talk about our summer AMP sale.
As you can see on your screen, Golden Helix has put together a variety of software packages focused around the AMP workflow you just saw. On the screen, you can see what's being offered in each column, which is considered a pack. There are a few important details I'd like to point out, because they really make this summer AMP sale different from anything you've seen Golden Helix offer before. First, all of these packages are offered with 15-month licenses. As Darby mentioned earlier, our software is on an annual subscription model, so with the summer AMP sale packs, you're getting an extra three months on top of what we normally offer. And second, to sweeten the pot, if you purchase a two-year license of any of these summer AMP packs, we'll be topping it off with an additional six months free. That would total a three-year license, which is incredible.
And third, of course, these packages are limited. You can see at the bottom of each column a grey box showing the number of packs remaining, so I'd recommend not waiting too long if you're interested in any of these, because they have been going quickly and I assume that will continue. I'll leave it at that and jump into our Q&A, but if you are interested in any of these, you can mention that in the questions pane, use the raise-your-hand feature, or email your account director or [email protected]. Basically, this is just a great time to implement the AMP workflow, so please let us know if you'd like to learn more.
Question 1: how easily can the report customizations be done? Honestly, the Word-based report template interface makes things really easy to modify, and I can show you a pretty quick example of that. Let's open up that template file we were looking at. A really simple case might be just changing the lab logo; you could move these fields anywhere, cut and paste them to reposition them, add additional fields, or remove them. But I'll just use this lab logo as an example. If I go into the header and delete this really quickly, and then open up... I think I've got a screenshot. Yeah, that'll work, we'll use this. Copy that, enter it here, and let me shrink this a little so it's more size-friendly. You get a sense of basically going in and modifying this however you'd like. Then you just save the template, and when you render, it'll work off the modified template you set up yourself. We can always do this in a more hands-on training if you have more specific considerations for the customization you'd like to see.
Question 2: what steps should I go through to build an interpretation for a gene or biomarker I'm less familiar with? There's a lot of content there. When we're looking at the biomarker section, for example, even just in the gene summary, we do a summary breakdown of all the information available for these genes, and you'll see even more of that in the context of hallmarks, different transcripts, and everything else. But one of the more powerful things you get is that already comprehensive pull of all that information together. Let me go back to the BRAF one, where we had that CancerKB submission. You could definitely go in and collect the information coming in from the databases to grow this interpretation, which is incredibly helpful, but what will streamline it even further is leveraging CancerKB. So we'd definitely recommend starting with that early and using it as much as possible.
Question 3: how automated can we make this workflow? What is the least amount of manual processing when handling multiple samples? For a good insight into that, let me go back to the slides and swap these screens really quick, back to our full-stack slide. When you look at the reality of the secondary-stage processing from FASTQ down to VCF and BAM files, a lot of work has gone into that; we demonstrated in a recent webcast, which we can always give you some insight on, how you go from the sequencer output to the FASTQ file and then produce the BAM and VCF in an automated fashion. That's actually pretty routine to set up: really, you just want that pipeline to work, and you don't want to touch it much once it's going and doing what it should. With Sentieon, there are also solutions for not only germline variant callers but somatic ones too, so it's directly relevant to this AMP guideline criteria. Beyond that, you basically have the import files ready to go. The automated importing and workflow utilization from the VarSeq perspective is done with VSPipeline, which can take an auto-fired trigger approach: the output of data from the secondary stage automatically gets fed into tertiary analysis. That takes you from FASTQ to the interesting variants, filtered down to those that warrant investigation for interpretation and classification in VSClinical. That's where things get a little more manual: somebody might go in, directly monitor the output of the filtered set of variants, validate that it looks okay, and then quickly work through the interpretation process to make sure they're not missing anything for the variants in that deep dive to classification.
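The auto-fired trigger described above boils down to: watch the secondary pipeline's output directory, and when a new VCF lands, construct and launch a VSPipeline run for it. Here is a minimal sketch of that glue logic; the VSPipeline command-line arguments shown are illustrative placeholders, not documented flags, and the directory and template names are hypothetical.

```python
import subprocess
from pathlib import Path

def build_vspipeline_command(vcf_path, project_template):
    """Construct an illustrative VSPipeline invocation for one sample's VCF.
    The flags here are placeholders standing in for whatever batch commands
    your VSPipeline setup actually uses."""
    sample_name = vcf_path.name.split(".", 1)[0]  # SAMPLE-001.vcf.gz -> SAMPLE-001
    return [
        "vspipeline",  # assumes the VSPipeline executable is on PATH
        "-c", f"project_create {sample_name} {project_template}",
        "-c", f"import {vcf_path}",
    ]

def watch_output_dir(output_dir, project_template):
    """Fire a tertiary-analysis import for every VCF the secondary stage produced."""
    for vcf in sorted(Path(output_dir).glob("*.vcf.gz")):
        subprocess.run(build_vspipeline_command(vcf, project_template), check=True)

# Command construction for one hypothetical sample:
cmd = build_vspipeline_command(Path("/data/secondary_out/SAMPLE-001.vcf.gz"),
                               "amp_workflow_template")
print(cmd[0])  # vspipeline
```

In practice, this trigger would be wired into whatever completes the secondary stage (a cron job, a workflow manager hook), leaving only the filtered-variant review and interpretation as the manual steps.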
Then there's the final stage of clinical reporting, and you saw how quickly that was done. All of this can, in some cases, be pushed up into VSWarehouse if you want the full suite package, so you can track those variants over time against an evolving knowledge base. So a lot of it, maybe 80% or more, can be automated in a way that gets you down to just the list of variants you have to investigate in a more manual fashion. We can always talk about that in more detail, and we've got some example webcasts that explore that capability as well.
Great. Thank you, Darby. Unfortunately, we will have to go ahead and wrap things up and conclude today's presentation. However, there are a few questions we weren't able to get to, so we will be reaching out personally to each of you. We greatly appreciate anyone willing to take our short survey, which will be popping up on your screen in a moment. We take your feedback seriously and use it to direct our future webcasts. Thank you in advance for doing that.
I would like to thank everyone for joining us today, and thank you, Darby, for this excellent presentation. I look forward to seeing everyone on our next webcast, and hopefully to your interest in the AMP packs. Have a great rest of your day.