Functions
=========
* def loadPrimers
* def segToFrag
* def profileCorrection
* def smoothFragFile
* def runDomainogram
* def density_to_countsPerFrag
* def workflow_groups
Variables
=========
* dictionary processed = {'lib': {}, 'density': {}, '4cseq': {}}
* dictionary regToExclude = {}
* list       new_libs = []
*            job_groups = job.groups
* tuple      htss_mapseq = frontend.Frontend( url=mapseq_url )
* dictionary run_domainogram = {}
* tuple      before_profile_correction = group.get('before_profile_correction',False)
*            via = via)
* list       density_files = []
* list       libname = mapseq_files[gid]
* tuple      density_file
* tuple      description
* dictionary futures = {}
* tuple      file1 = unique_filename_in()
* tuple      file2 = unique_filename_in()
* tuple      file3 = unique_filename_in()
* list       nFragsPerWin = group['window_size']
* tuple      resfile = unique_filename_in()
* dictionary futures2 = {}
* list       profileCorrectedFile = processed['4cseq']
* list       bedGraph = processed['4cseq']
* list       grName = job_groups[gid]
* tuple      file4 = unique_filename_in()
* list       regCoord = regToExclude[gid]
* int        script_path = 10
* list       resFiles = []
* list       logFile = f[1]
*            start = False
* list       tarname = job_groups[gid]
* tuple      res_tar = tarfile.open(tarname, "w:gz")
* tuple      s = s.strip()
* string     step = "density"
* string     fname = "density_file_"
* string     groupId = "sql"
* tuple      wig = unique_filename_in()
* string     comment = "all informative frags - null included"
* tuple      trsql = track.track(resfiles[3])
* tuple      bwig = unique_filename_in()
* tuple      trwig = track.track(bwig,chrmeta=trsql.chrmeta)
* dictionary selection = {'score':(0.01,sys.maxint)}
* list       reportProfileCorrection = resfiles[1]
* list       smoothFile = resfiles[0]
* list       afterProfileCorrection = resfiles[1]
* tuple      nFrags = str(job_groups[gid]['window_size'])
* tuple      tarFile = resfiles.pop()
Detailed Description
=======================
Module: bbcflib.c4seq
=======================
This module provides functions to run a 4C-seq analysis from reads mapped onto a
reference genome.
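For orientation, a minimal import sketch; only the functions documented on this
page are shown, and the import path simply mirrors the module name above.

    # Minimal sketch: import the documented helpers from the module.
    from bbcflib.c4seq import (loadPrimers, segToFrag,
                               density_to_countsPerFrag, workflow_groups)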
Function Documentation
======================
def bbcflib::c4seq::density_to_countsPerFrag( ex, file_dict, groups, assembly,
                                              regToExclude, script_path, via='lsf' )

Main function to compute normalised counts per fragment from a density file.
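A hedged usage sketch based only on the signature above; `ex`, `file_dict`, `job`
and `assembly` are placeholders standing in for objects built earlier in the
pipeline, and the coordinate format shown for the excluded region is an assumption.

    # Illustrative call (all argument values are placeholders, not from this page).
    counts_files = density_to_countsPerFrag(      # return value name is illustrative
        ex,                                   # bein execution object
        file_dict,                            # density files per group/run (structure assumed)
        job.groups,                           # group descriptions from the job
        assembly,                             # genome assembly object
        {gid: "chr2:100000-200000"},          # regToExclude, keyed by group id (format assumed)
        "/path/to/c4seq/scripts",             # script_path: directory with the helper scripts
        via='lsf')                            # or 'local'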
def bbcflib::c4seq::loadPrimers( primersFile )

Create a dictionary with information for each primer (read from the file primers.fa).
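A small usage sketch, assuming `primers.fa` is the primers file mentioned above;
the exact keys and values of the returned dictionary are not specified on this page.

    # Illustrative call: build the per-primer dictionary from the primers file.
    from bbcflib.c4seq import loadPrimers

    primers_dict = loadPrimers('primers.fa')
    for name, info in primers_dict.items():   # key/value layout is an assumption
        print(name, info)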
def bbcflib::c4seq::segToFrag( countsPerFragFile, regToExclude="", script_path='' )

This function calls segToFrag.awk, which transforms the counts per segment into a
normalised count per fragment. Provide a region to exclude if needed.
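A hedged example call based only on the signature above; the input file name, the
excluded-region string and the script directory are placeholders, and the coordinate
format for `regToExclude` is an assumption.

    # Illustrative call: turn per-segment counts into normalised per-fragment counts.
    from bbcflib.c4seq import segToFrag

    res = segToFrag('counts_per_frag.bed',                    # countsPerFragFile (placeholder name)
                    regToExclude='chr2:100000-200000',        # assumed coordinate format
                    script_path='/path/to/c4seq/scripts')     # directory containing segToFrag.awk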
def bbcflib::c4seq::workflow_groups( ex, job, primers_dict, assembly, mapseq_files,
                                     mapseq_url, c4_url=None, script_path='',
                                     logfile=None, via='lsf' )

Main workflow:
* open the 4C-seq minilims and create an execution
* 0. get/create the library
* 1. if necessary, compute the density file from the bam file (mapseq.parallel_density_sql)
* 2. compute the counts per fragment for each density file (using gFeatMiner:mean_score_by_feature)
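A hedged sketch of how the main workflow might be driven, assuming `ex`, `job`,
`assembly` and `mapseq_files` were prepared by the usual bbcflib/mapseq steps;
every concrete value below is a placeholder and the name of the return value is
illustrative.

    # Illustrative invocation of the main 4C-seq workflow (placeholders throughout).
    from bbcflib.c4seq import loadPrimers, workflow_groups

    primers_dict = loadPrimers('primers.fa')
    result = workflow_groups(ex, job, primers_dict, assembly,
                             mapseq_files,                             # output of the mapseq step
                             'http://htsstation.example.org/mapseq/',  # mapseq_url (placeholder)
                             c4_url=None,
                             script_path='/path/to/c4seq/scripts',
                             logfile=open('c4seq.log', 'w'),
                             via='lsf')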
Variable Documentation
======================
list bbcflib::c4seq::density_file

Initial value:

    parallel_density_sql( ex, mapseq_files[gid][rid]['bam'],
                          assembly.chromosomes,
                          nreads=mapseq_files[gid][rid]['stats']["total"],
                          merge=0,
                          convert=False,
                          via=via )
tuple bbcflib::c4seq::description

Initial value:

    set_file_descr("density_file_"+libname+".sql",
                   groupId=gid, step="density", type="sql", view='admin', gdv="1")
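A guess at how the two initial values above fit together, reconstructed from the
snippets shown rather than copied from the module: the density track is computed
per run with mapseq.parallel_density_sql and then registered on the execution with
the corresponding description (the `ex.add` registration call and the import path
of `set_file_descr` are assumptions).

    # Sketch only: ex, gid, rid, libname, assembly, mapseq_files and via are placeholders.
    from bbcflib.mapseq import parallel_density_sql
    from bbcflib.common import set_file_descr    # import path assumed

    density_file = parallel_density_sql(ex, mapseq_files[gid][rid]['bam'],
                                        assembly.chromosomes,
                                        nreads=mapseq_files[gid][rid]['stats']["total"],
                                        merge=0, convert=False, via=via)
    description = set_file_descr("density_file_" + libname + ".sql",
                                 groupId=gid, step="density", type="sql",
                                 view='admin', gdv="1")
    ex.add(density_file, description=description)   # registration call is an assumption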