Workflow quick reference (Charm-Tau Detector wiki, last edited 2024-02-15 by A.M.Suharev)
<hr />
<div>== Register and prepare account ==<br />
* Register at the BINP/GCF cluster. Contact [[User:A.M.Suharev|Andrey Sukharev]] or [[User:D.A.Maksimov|Dmitry Maximov]]<br />
<br />
* Log in to <code>stark.inp.nsk.su</code> (BINP local access only) or <code>proxima.inp.nsk.su</code> (accessible from the Internet, pubkey authentication only)<br />
<br />
The login servers have similar configurations and share a common <code>/home</code>.<br />
<br />
<code>git</code> is accessible over the <code>ssh</code> protocol with key-based authorization.<br />
<br />
* If necessary, create an <code>ssh</code> key using<br />
ssh-keygen<br />
<br />
Accept the defaults; passwordless keys are allowed.<br />
<br />
You will then have two files in <code>~/.ssh/</code>: <code>id_rsa</code> and <code>id_rsa.pub</code>. These are your private and public keys, respectively.<br />
The full paths to the files are displayed in the terminal.<br />
<br />
* Log in to the <code>gitlab</code> server https://git.inp.nsk.su/ using the same name/password as for <code>stark</code><br />
* Register your public key for your account, i.e. add the contents of <code>~/.ssh/id_rsa.pub</code> to the form at the following link:<br />
https://git.inp.nsk.su/-/profile/keys<br />
<br />
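For convenience, the key-based login can be preconfigured in <code>~/.ssh/config</code>. A minimal sketch (the <code>Host</code> alias and the key path are illustrative; adjust them to your setup):<br />

```shell
# Append a host entry for proxima to an ssh config file.
# A temporary file is used here for illustration; in practice edit ~/.ssh/config.
cfg=$(mktemp)
cat >> "$cfg" <<'EOF'
Host proxima
    HostName proxima.inp.nsk.su
    User your_user_name
    IdentityFile ~/.ssh/id_rsa
EOF
# With this entry in ~/.ssh/config, "ssh proxima" uses the key automatically.
cat "$cfg"
```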
<br />
== Setting up a remote ssh connection via PuTTY on Windows ==<br />
<br />
* Run PuTTY<br />
* In the Session tab, in the "Host Name (or IP address)" field, enter: user_name@proxima.inp.nsk.su<br />
* Use the PuTTYgen program to create a public/private key pair. Send the public key to [[User:A.M.Suharev|Andrey Sukharev]]<br />
* In the Connection -> SSH -> Auth tab, in the "Private key file for authentication" field, enter the path to your private key<br />
* In the Session tab, in the "Saved Sessions" field, enter a name for the current settings<br />
* In the Session tab, click the "Save" button<br />
<br />
== Create working repository (fork) ==<br />
Open the central repository<br />
https://git.inp.nsk.su/sctau/aurora<br />
<br />
and make a fork.<br />
<br />
<br />
== Tune working environment ==<br />
Each time you log in to <code>stark</code>/<code>proxima</code>, run:<br />
setupSCTAU<br />
asetup ''the software version''<br />
<br />
The available versions are at least:<br />
* 0.2.3 "release" - the environment from before the first official release<br />
asetup Aurora,0.2.3<br />
* 1.0.0 release - a fixed build for detector investigation tasks, and for physics and <br />
other tasks requiring a stable environment. The 1.0.X series is intended for bug fixes.<br />
asetup Aurora,1.0.0<br />
* master - the branch and build for future development, with no guarantee of <br />
stability, compatibility or correct operation.<br />
asetup Aurora,master,latest<br />
<br />
Ask the software coordinators if you're unsure what to use as ''the software version''.<br />
<br />
To configure git, do the following once.<br />
<br />
First, run:<br />
git sctau init-config<br />
<br />
Check that the settings are correct; the defaults should be fine.<br />
<br />
Then apply the settings:<br />
git sctau init-config --apply<br />
<br />
== Tune working directory ==<br />
To develop new code or to modify existing code:<br />
<br />
Create a workarea in your home directory (on <code>stark</code>/<code>proxima</code>):<br />
mkdir workarea<br />
cd workarea<br />
<br />
Create directories for building and running:<br />
mkdir build run<br />
<br />
Prepare the workarea (just once):<br />
git sctau init-workdir ssh://git@git.inp.nsk.su/sctau/aurora.git<br />
<br />
Go to the working directory:<br />
cd aurora<br />
<br />
Fetch updates from the head repository (this must be done before '''creating a branch''' if the workarea has existed for a long time):<br />
git fetch upstream<br />
<br />
Create a working branch. Give it a sensible name:<br />
git checkout -b <TopicDevelopmentBranch> upstream/<target_branch> --no-track<br />
<br />
<br />
'''The line above will not work if you simply copy-paste it. This is intentional: it is important to choose the correct target branch. If unsure, ask the software coordinators.'''<br />
<br />
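To illustrate what <code>--no-track</code> does, here is a self-contained sketch using plain <code>git</code> in a temporary directory (the repository and branch names are invented for the demonstration; they are not the Aurora ones):<br />

```shell
# Demonstration of --no-track in a throwaway repository.
# All names (demo-upstream, my-topic-branch) are hypothetical.
work=$(mktemp -d)
git init -q --bare "$work/demo-upstream.git"
git init -q "$work/clone"
cd "$work/clone"
git -c user.email=demo@example.org -c user.name=Demo commit -q --allow-empty -m 'initial'
git remote add upstream "$work/demo-upstream.git"
git push -q upstream HEAD:master
git fetch -q upstream
# Branch off upstream/master without making it the tracking branch:
git checkout -q -b my-topic-branch upstream/master --no-track
git rev-parse --abbrev-ref HEAD   # prints: my-topic-branch
```

With <code>--no-track</code>, a later plain <code>git push</code> will not silently target the upstream branch; you push to your own fork explicitly.<br />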
<br />
To modify an existing package, check it out:<br />
git sctau addpkg GenExamples<br />
<br />
To create a new package, create its whole directory structure yourself, including CMakeLists.txt.<br />
<br />
''Before creating a new package, ask the software coordinators how to name it and where to place it.''<br />
<br />
Useful commands to manage packages in your working area:<br />
<br />
List local packages:<br />
git sctau listpkg<br />
<br />
List packages in the whole repository:<br />
git sctau listpkg --all<br />
or with a regexp filter:<br />
git sctau listpkg --all 'Det'<br />
git sctau listpkg --all '/G4.*U'<br />
<br />
<br />
Remove a local package (the repository is left intact):<br />
git sctau rmpkg <PackageName><br />
<br />
It is recommended to keep locally only the packages under active development.<br />
<br />
== Build and run ==<br />
To build:<br />
cd ../build/<br />
cmake ../aurora/Projects/WorkDir<br />
make<br />
<br />
To set up the local environment (to use locally built packages instead of the default versions):<br />
source x86_64-slc7-gcc9-opt/setup.sh<br />
<br />
To run:<br />
cd ../run<br />
<br />
Run primary generators:<br />
ctaurun GenExamples/evtgen.py <br />
<br />
Run full simulation:<br />
ctaurun G4SimExamples/fullsim_example.py<br />
<br />
Download the standard job options and run with the local file:<br />
get_joboptions fullsim_example.py<br />
ctaurun ./fullsim_example.py<br />
<br />
== Commit changes ==<br />
In the <code>aurora</code> directory:<br />
<br />
Add changed files:<br />
git add <list of files><br />
<br />
Remove unnecessary files:<br />
git rm <list of files><br />
<br />
<br />
Commit the changes to the local repository:<br />
git commit -m 'Meaningful message concerning the introduced changes'<br />
<br />
Each commit should contain a minimal set of logically interconnected changes.<br />
You may (and will) have many commits when developing a package.<br />
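As a sketch of splitting work into small commits, here is a throwaway-repository example with plain <code>git</code> (the file names and messages are invented):<br />

```shell
# Two logically separate changes recorded as two commits (invented file names).
repo=$(mktemp -d)
cd "$repo"
git init -q
echo 'feature code'   > feature.txt
echo 'unrelated note' > notes.txt
git add feature.txt
git -c user.email=demo@example.org -c user.name=Demo commit -q -m 'Add the feature'
git add notes.txt
git -c user.email=demo@example.org -c user.name=Demo commit -q -m 'Add the notes'
git rev-list --count HEAD   # prints: 2
```

Staging and committing the files separately keeps each commit reviewable on its own.<br />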
<br />
Push the changes to the server:<br />
git push<br />
<br />
When pushing for the first time, do it like this:<br />
git push --set-upstream origin <TopicDevelopmentBranch><br />
<br />
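The effect of <code>--set-upstream</code> can be seen in a self-contained sketch with plain <code>git</code> and throwaway repositories (all names are invented):<br />

```shell
# First push with --set-upstream records origin/<branch> as the tracking branch.
base=$(mktemp -d)
git init -q --bare "$base/origin.git"
git init -q "$base/work"
cd "$base/work"
git -c user.email=demo@example.org -c user.name=Demo commit -q --allow-empty -m 'initial'
git checkout -q -b topic-branch
git remote add origin "$base/origin.git"
git push -q --set-upstream origin topic-branch
# From now on a bare "git push" suffices; the tracking ref is:
git rev-parse --abbrev-ref 'topic-branch@{upstream}'   # prints: origin/topic-branch
```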
<br />
After the push, <code>git</code> displays a link to follow to create a <code>Merge Request</code> (adding your contributions to the common repository).<br />
<br />
When doing a merge request, make sure:<br />
* the branch is correct<br />
* changes are well described<br />
** reasons for the changes are demonstrated<br />
** the changes are described<br />
** influence on the other software and possible side effects are indicated<br />
** there are links to related issues, if any<br />
<br />
== Running graphical applications remotely ==<br />
If your connection does not allow you to run X applications directly, please try [[x2go]].<br />
<br />
<br />
[[Category:Software]]</div>
Aurora VM images (last edited 2023-09-21 by A.M.Suharev)
<hr />
<div>== VM images ==<br />
The Aurora VM image set can be downloaded from [https://sct.inp.nsk.su/internal/vm_images.html the SCT site]. The set consists of:<br />
* <p>The system image<br>This is the basic Scientific Linux 7 image containing the software stack native to Aurora. The image is set up to automatically mount the Aurora release image and the home image, if available. The installed system features the MATE Desktop environment. The only existing user is "liveuser", without a password. The liveuser is allowed to perform passwordless "sudo", which makes image customization possible.<br>The installed system can obtain its network configuration via DHCP.</p><br />
<br />
* <p>The Aurora release image<br>The image contains the full tree of the specific Aurora release plus data files and external software required for Aurora operation. After the VM boot, the user may set up the environment in a [[Workflow quick reference|conventional way]]:<br />
setupSCTAU<br />
asetup Aurora,RELEASE_VERSION<br />
where RELEASE_VERSION is the three-digit release version identifier (e.g. '2.1.0').</p><br />
<br />
* <p>The home image<br>If you do not add the home image, all the files you produce while working with the VM will go to the system image. The system image is kept rather small to facilitate downloading, so at some point you might find it full. To avoid this, we provide an empty home image containing just the home directory and expandable up to 100 GB.</p><br />
<br />
You may also create and add other images as you see fit.<br />
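For instance, an extra disk image can be created with <code>qemu-img</code> (the file name and size below are arbitrary, and the tool must be installed on your host):<br />

```shell
# Create an empty 50 GB qcow2 image (sparse: it occupies little space until used).
qemu-img create -f qcow2 extra-data.qcow2 50G
```

The new image can then be attached to the VM through the same storage dialogs as the release and home images.<br />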
<br />
== VirtualBox Setup ==<br />
Here are the steps to create the Aurora VM in Oracle VirtualBox.<br />
<br />
* Run the Oracle VM VirtualBox Manager<br />
* Click "New" on the main page:<br />
[[File:Newvbvm.png]]<br />
* In the window that opens:<br />
**name the virtual machine<br />
**select its directory<br />
**choose "Linux" in the "Type" field<br />
**choose "Other Linux (64-bit)" in the "Version" field<br />
[[File:Namevbvm.png]]<br />
* Set the memory amount and the number of CPU cores for the VM. 2 CPUs and 2048 MB are generally enough:<br />
[[File:Cpuvbvm.png]]<br />
* Then choose "Use an Existing Virtual Hard Disk File" and specify the path to the downloaded sl7.vdi image:<br />
[[File:Sl7vbvm.png]]<br />
<br />
Then click "finish", but before run need add (optionally)"home" and "Aurora release" images to created VM.<br />
<br />
To do that, choose the created VM on the main page and click "Settings".<br />
<br />
[[File:Setvbvm.png]]<br />
* Then, choose "Storage" tab and click "Adds hard disk" against "Controller: SATA". Add downloaded images.<br />
[[File:Satavbvm.png]]<br />
<br />
That's all. To run the VM, click "Start" on the main page.<br />
<br />
== QEMU/KVM Setup ==<br />
Here we briefly demonstrate how to create the Aurora VM with QEMU/KVM using virt-manager, the conventional libvirtd GUI on Linux.<br />
<br />
* Put the downloaded .qcow2 images into a configured libvirt storage directory (the default is /var/lib/libvirt/images; requires root access).<br />
* Run the virt-manager GUI and connect to the local QEMU/KVM instance.<br />
* Create a new virtual machine and choose "Import existing disk image":<br />
[[File:newvm.png]]<br />
* Choose the system image to provide the storage path:<br />
[[File:newvmimage.png]]<br />
* Set the memory amount and the number of CPU cores for the VM. 2 CPUs and 2048 MB are generally enough:<br />
[[File:newvmparam.png]]<br />
* On the next step, give the VM a name, check "Customize configuration", and optionally choose a network.<br />
* Before pressing "Begin Installation", add the Aurora release image and (optionally) the home image to the VM:<br />
[[File:newvmaddimage.png]]<br />
<br />
To do that, press "Add Hardware", then choose "Storage", then "Select or create custom image", and then select the image.<br />
* Ensure you have added all the images you need and that the boot device is the first image, then click "Begin Installation":<br />
[[File:newvmstartinstall.png]]<br />
* The VM should boot shortly. Just press Enter when asked for the "liveuser" password:<br />
[[File:newvmready.png]]<br />
<br />
== Superuser access ==<br />
To obtain superuser access, use sudo.<br />
<br />
[[Category:Not_public]][[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Aurora_VM_imagesAurora VM images2022-12-12T04:25:37Z<p>A.M.Suharev: /* VM images */</p>
<hr />
<div>== VM images ==<br />
The Aurora VM images set could be downloaded from [https://sct.inp.nsk.su/internal/vm_images.html the SCT site]. The set consists of:<br />
* <p>The system image<br>This is the basic Scientific Linux 7 image containing the software stack native for Aurora. The image is set up to automatically mount Aurora release image and home image if available. The installed system features MATE Desktop environment. The only existing user is the "liveuser" w/o password. The liveuser is allowed to perform the passwordless "sudo" thus making possible the image customization.<br>The installed system could obtain its network configuration via DHCP.</p><br />
<br />
* <p>The Aurora release image<br>The image contains the full tree of the specific Aurora release plus data files and external software required for Aurora operation. After the VM boot, the user may set up the environment in a [[Workflow quick reference|conventional way]]:<br />
setupSCTAU<br />
asetup Aurora,RELEASE_VERSION<br />
where RELEASE_VERISON is the three-digit release version identifier (i. e. '2.1.0').</p><br />
<br />
* <p>The home image<br>If you do not add the home image, all the files you produce working with the VM would go to the system image. The system image is kept rather small to facilitate downloading, so at some point you might find it full. To avoid this, we provide the empty home image containing just the home directory and extendible up to 100 GB.</p><br />
<br />
You may also create and add other images according to your taste.<br />
<br />
== VirtualBox Setup ==<br />
<br />
== QEMU/KVM Setup ==<br />
Here we demonstrate briefly how to create the Aurora VM using QEMU/KVM via conventional Linux tool virt-manager, the libvirtd GUI.<br />
<br />
* Put the downloaded .qcow2 images to a configured libvirt storage directory (the default one is /var/lib/libvirt/images, requires root access).<br />
* Run virt-manager GUI, connect to local QEMU/KVM instance.<br />
* Create new virtual machine, choose "Import existing disk image":<br />
[[File:newvm.png]]<br />
* Choose the system image to provide the storage path:<br />
[[File:newvmimage.png]]<br />
* Set the memory amount and the CPU cores number for the VM. 2 CPU and 2048 MB is generally enough:<br />
[[File:newvmparam.png]]<br />
* On the next step, give name to the VM, check "Customize configuration", optionally choose a network.<br />
* Before pressing "Begin Installation", add Aurora release and (optionally) home images to the VM:<br />
[[File:newvmaddimage.png]]<br />
<br />
To do that, press "Add Hardware", then choose "Storage", then "Select or create custom image", and the select the image.<br />
* Ensure you have added all images you need, and the boot device is the first image, and then click "Begin Installation":<br />
[[File:newvmstartinstall.png]]<br />
* The VM should boot shortly. Just press Enter when asked for password for "liveuser":<br />
[[File:newvmready.png]]<br />
<br />
[[Category:Not_public]][[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Aurora_VM_imagesAurora VM images2022-12-09T07:41:43Z<p>A.M.Suharev: /* QEMU/KVM Setup */</p>
<hr />
<div>== VM images ==<br />
The Aurora VM images set could be downloaded from [https://sct.inp.nsk.su/internal/vm_images.html the SCT site]. The set consists of:<br />
* <p>The system image<br>This is the basic Scientific Linux 7 image containing the software stack native for Aurora. The image is set up to automatically mount Aurora release image and home image if available. The installed system features MATE Desktop environment. The only existing user is the "liveuser" w/o password. The liveuser is allowed to perform the passwordless "sudo" thus making possible the image customization.</p><br />
<br />
* <p>The Aurora release image<br>The image contains the full tree of the specific Aurora release plus data files and external software required for Aurora operation. After the VM boot, the user may set up the environment in a [[Workflow quick reference|conventional way]]:<br />
setupSCTAU<br />
asetup Aurora,RELEASE_VERSION<br />
where RELEASE_VERISON is the three-digit release version identifier (i. e. '2.1.0').</p><br />
<br />
* <p>The home image<br>If you do not add the home image, all the files you produce working with the VM would go to the system image. The system image is kept rather small to facilitate downloading, so at some point you might find it full. To avoid this, we provide the empty home image containing just the home directory and extendible up to 100 GB.</p><br />
<br />
You may also create and add other images according to your taste.<br />
<br />
== VirtualBox Setup ==<br />
<br />
== QEMU/KVM Setup ==<br />
Here we demonstrate briefly how to create the Aurora VM using QEMU/KVM via conventional Linux tool virt-manager, the libvirtd GUI.<br />
<br />
* Put the downloaded .qcow2 images to a configured libvirt storage directory (the default one is /var/lib/libvirt/images, requires root access).<br />
* Run virt-manager GUI, connect to local QEMU/KVM instance.<br />
* Create new virtual machine, choose "Import existing disk image":<br />
[[File:newvm.png]]<br />
* Choose the system image to provide the storage path:<br />
[[File:newvmimage.png]]<br />
* Set the memory amount and the CPU cores number for the VM. 2 CPU and 2048 MB is generally enough:<br />
[[File:newvmparam.png]]<br />
* On the next step, give name to the VM, check "Customize configuration", optionally choose a network.<br />
* Before pressing "Begin Installation", add Aurora release and (optionally) home images to the VM:<br />
[[File:newvmaddimage.png]]<br />
<br />
To do that, press "Add Hardware", then choose "Storage", then "Select or create custom image", and the select the image.<br />
* Ensure you have added all images you need, and the boot device is the first image, and then click "Begin Installation":<br />
[[File:newvmstartinstall.png]]<br />
* The VM should boot shortly. Just press Enter when asked for password for "liveuser":<br />
[[File:newvmready.png]]<br />
<br />
[[Category:Not_public]][[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Aurora_VM_imagesAurora VM images2022-12-09T07:39:30Z<p>A.M.Suharev: /* QEMU/KVM Setup */</p>
<hr />
<div>== VM images ==<br />
The Aurora VM images set could be downloaded from [https://sct.inp.nsk.su/internal/vm_images.html the SCT site]. The set consists of:<br />
* <p>The system image<br>This is the basic Scientific Linux 7 image containing the software stack native for Aurora. The image is set up to automatically mount Aurora release image and home image if available. The installed system features MATE Desktop environment. The only existing user is the "liveuser" w/o password. The liveuser is allowed to perform the passwordless "sudo" thus making possible the image customization.</p><br />
<br />
* <p>The Aurora release image<br>The image contains the full tree of the specific Aurora release plus data files and external software required for Aurora operation. After the VM boot, the user may set up the environment in a [[Workflow quick reference|conventional way]]:<br />
setupSCTAU<br />
asetup Aurora,RELEASE_VERSION<br />
where RELEASE_VERISON is the three-digit release version identifier (i. e. '2.1.0').</p><br />
<br />
* <p>The home image<br>If you do not add the home image, all the files you produce working with the VM would go to the system image. The system image is kept rather small to facilitate downloading, so at some point you might find it full. To avoid this, we provide the empty home image containing just the home directory and extendible up to 100 GB.</p><br />
<br />
You may also create and add other images according to your taste.<br />
<br />
== VirtualBox Setup ==<br />
<br />
== QEMU/KVM Setup ==<br />
Here we demonstrate briefly how to create the Aurora VM using QEMU/KVM via conventional Linux tool virt-manager, the libvirtd GUI.<br />
<br />
* Put the downloaded .qcow2 images to a configured libvirt storage directory (the default one is /var/lib/libvirt/images, requires root access).<br />
* Run virt-manager GUI, connect to local QEMU/KVM instance.<br />
* Create new virtual machine, choose "Import existing disk image".<br />
[[File:newvm.png]]<br />
* Choose the system image to provide the storage path<br />
[[File:newvmimage.png]]<br />
* Set the memory amount and the CPU cores number for the VM<br />
[[File:newvmparam.png]]<br />
* On the next step, give name to the VM, check "Customize configuration", optionally choose a network.<br />
* Before pressing "Begin Installation", add Aurora release and (optionally) home images.<br />
[[File:newvmaddimage.png]]<br />
<br />
To do that, press "Add Hardware", then choose "Storage", then "Select or create custom image", and the select the image.<br />
* Ensure you have added all images you need, and the boot device is the first image, and then click "Begin Installation".<br />
[[File:newvmstartinstall.png]]<br />
* The VM should boot shortly. Just press Enter when asked for password for "liveuser".<br />
[[File:newvmready.png]]<br />
<br />
[[Category:Not_public]][[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Aurora_VM_imagesAurora VM images2022-12-09T07:16:14Z<p>A.M.Suharev: /* VM images */</p>
<hr />
<div>== VM images ==<br />
The Aurora VM images set could be downloaded from [https://sct.inp.nsk.su/internal/vm_images.html the SCT site]. The set consists of:<br />
* <p>The system image<br>This is the basic Scientific Linux 7 image containing the software stack native for Aurora. The image is set up to automatically mount Aurora release image and home image if available. The installed system features MATE Desktop environment. The only existing user is the "liveuser" w/o password. The liveuser is allowed to perform the passwordless "sudo" thus making possible the image customization.</p><br />
<br />
* <p>The Aurora release image<br>The image contains the full tree of the specific Aurora release plus data files and external software required for Aurora operation. After the VM boot, the user may set up the environment in a [[Workflow quick reference|conventional way]]:<br />
setupSCTAU<br />
asetup Aurora,RELEASE_VERSION<br />
where RELEASE_VERISON is the three-digit release version identifier (i. e. '2.1.0').</p><br />
<br />
* <p>The home image<br>If you do not add the home image, all the files you produce working with the VM would go to the system image. The system image is kept rather small to facilitate downloading, so at some point you might find it full. To avoid this, we provide the empty home image containing just the home directory and extendible up to 100 GB.</p><br />
<br />
You may also create and add other images according to your taste.<br />
<br />
== VirtualBox Setup ==<br />
<br />
== QEMU/KVM Setup ==<br />
Here we demonstrate briefly how to create the Aurora VM using QEMU/KVM using conventional Linux tool virt-manager, the libvirtd GUI.<br />
<br />
* Put the downloaded images to a configured libvirt storage directory (the default one is /var/lib/libvirt/images, requires root access).<br />
* Run virt-manager GUI, connect to local QEMU/KVM instance.<br />
* Create new virtual machine, choose "Import existing disk image".<br />
[[File:newvm.png]]<br />
* Choose the system image to provide the storage path<br />
[[File:newvmimage.png]]<br />
* Set the memory amount and the CPU cores number for the VM<br />
[[File:newvmparam.png]]<br />
* On the next step, give name to the VM, check "Customize configuration", optionally choose a network.<br />
* Before pressing "Begin Installation", add Aurora release and (optionally) home images.<br />
[[File:newvmaddimage.png]]<br />
<br />
To do that, press "Add Hardware", then choose "Storage", then "Select or create custom image", and the select the image.<br />
* Ensure you have added all images you need, and the boot device is the first image, and then click "Begin Installation".<br />
[[File:newvmstartinstall.png]]<br />
* The VM should boot shortly. Just press Enter when asked for password for "liveuser".<br />
[[File:newvmready.png]]<br />
<br />
[[Category:Not_public]][[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Aurora_VM_imagesAurora VM images2022-12-09T07:07:42Z<p>A.M.Suharev: /* VM images */</p>
<hr />
<div>== VM images ==<br />
The Aurora VM images set consists of:<br />
* <p>The system image<br>This is the basic Scientific Linux 7 image containing the software stack native for Aurora. The image is set up to automatically mount Aurora release image and home image if available. The installed system features MATE Desktop environment. The only existing user is the "liveuser" w/o password. The liveuser is allowed to perform the passwordless "sudo" thus making possible the image customization.</p><br />
<br />
* <p>The Aurora release image<br>The image contains the full tree of the specific Aurora release plus data files and external software required for Aurora operation. After the VM boot, the user may set up the environment in a [[Workflow quick reference|conventional way]]:<br />
setupSCTAU<br />
asetup Aurora,RELEASE_VERSION<br />
where RELEASE_VERISON is the three-digit release version identifier (i. e. '2.1.0').</p><br />
<br />
* <p>The home image<br>If you do not add the home image, all the files you produce working with the VM would go to the system image. The system image is kept rather small to facilitate downloading, so at some point you might find it full. To avoid this, we provide the empty home image containing just the home directory and extendible up to 100 GB.</p><br />
<br />
You may also create and add other images according to your taste.<br />
<br />
== VirtualBox Setup ==<br />
<br />
== QEMU/KVM Setup ==<br />
Here we briefly demonstrate how to create the Aurora VM with QEMU/KVM using virt-manager, the conventional libvirt GUI.<br />
<br />
* Put the downloaded images to a configured libvirt storage directory (the default one is /var/lib/libvirt/images, requires root access).<br />
* Run virt-manager GUI, connect to local QEMU/KVM instance.<br />
* Create new virtual machine, choose "Import existing disk image".<br />
[[File:newvm.png]]<br />
* Choose the system image to provide the storage path<br />
[[File:newvmimage.png]]<br />
* Set the amount of memory and the number of CPU cores for the VM<br />
[[File:newvmparam.png]]<br />
* On the next step, give a name to the VM, check "Customize configuration", and optionally choose a network.<br />
* Before pressing "Begin Installation", add Aurora release and (optionally) home images.<br />
[[File:newvmaddimage.png]]<br />
<br />
To do that, press "Add Hardware", then choose "Storage", then "Select or create custom image", and then select the image.<br />
* Ensure you have added all the images you need and that the boot device is the first image, then click "Begin Installation".<br />
[[File:newvmstartinstall.png]]<br />
* The VM should boot shortly. Just press Enter when asked for the "liveuser" password.<br />
[[File:newvmready.png]]<br />
<br />
[[Category:Not_public]][[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Aurora_VM_imagesAurora VM images2022-12-09T06:47:32Z<p>A.M.Suharev: /* QEMU/KVM Setup */</p>
<hr />
<div>== VM images ==<br />
<br />
== VirtualBox Setup ==<br />
<br />
== QEMU/KVM Setup ==<br />
Here we briefly demonstrate how to create the Aurora VM with QEMU/KVM using virt-manager, the conventional libvirt GUI.<br />
<br />
* Put the downloaded images to a configured libvirt storage directory (the default one is /var/lib/libvirt/images, requires root access).<br />
* Run virt-manager GUI, connect to local QEMU/KVM instance.<br />
* Create new virtual machine, choose "Import existing disk image".<br />
[[File:newvm.png]]<br />
* Choose the system image to provide the storage path<br />
[[File:newvmimage.png]]<br />
* Set the amount of memory and the number of CPU cores for the VM<br />
[[File:newvmparam.png]]<br />
* On the next step, give a name to the VM, check "Customize configuration", and optionally choose a network.<br />
* Before pressing "Begin Installation", add Aurora release and (optionally) home images.<br />
[[File:newvmaddimage.png]]<br />
<br />
To do that, press "Add Hardware", then choose "Storage", then "Select or create custom image", and then select the image.<br />
* Ensure you have added all the images you need and that the boot device is the first image, then click "Begin Installation".<br />
[[File:newvmstartinstall.png]]<br />
* The VM should boot shortly. Just press Enter when asked for the "liveuser" password.<br />
[[File:newvmready.png]]<br />
<br />
[[Category:Not_public]][[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/File:Newvmready.pngFile:Newvmready.png2022-12-09T06:46:08Z<p>A.M.Suharev: </p>
<hr />
<div></div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/File:Newvmaddimage.pngFile:Newvmaddimage.png2022-12-09T06:45:12Z<p>A.M.Suharev: A.M.Suharev uploaded a new version of &quot;File:Newvmaddimage.png&quot;</p>
<hr />
<div></div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/File:Newvmstartinstall.pngFile:Newvmstartinstall.png2022-12-09T06:39:27Z<p>A.M.Suharev: </p>
<hr />
<div></div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/File:Newvmaddimage.pngFile:Newvmaddimage.png2022-12-09T06:38:07Z<p>A.M.Suharev: </p>
<hr />
<div></div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/File:Newvmparam.pngFile:Newvmparam.png2022-12-09T06:37:16Z<p>A.M.Suharev: </p>
<hr />
<div></div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/File:Newvmimage.pngFile:Newvmimage.png2022-12-09T06:36:50Z<p>A.M.Suharev: </p>
<hr />
<div></div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/File:Newvm.pngFile:Newvm.png2022-12-09T06:35:55Z<p>A.M.Suharev: </p>
<hr />
<div></div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Aurora_VM_imagesAurora VM images2022-12-09T06:35:32Z<p>A.M.Suharev: /* QEMU/KVM Setup */</p>
<hr />
<div>== VM images ==<br />
<br />
== VirtualBox Setup ==<br />
<br />
== QEMU/KVM Setup ==<br />
Here we briefly demonstrate how to create the Aurora VM with QEMU/KVM using virt-manager, the conventional libvirt GUI.<br />
<br />
* Put the downloaded images to a configured libvirt storage directory (the default one is /var/lib/libvirt/images, requires root access).<br />
* Run virt-manager GUI, connect to local QEMU/KVM instance.<br />
* Create new virtual machine, choose "Import existing disk image".<br />
[[File:newvm.png]]<br />
* Choose the system image to provide the storage path<br />
[[File:newvmimage.png]]<br />
* Set the amount of memory and the number of CPU cores for the VM<br />
[[File:newvmparam.png]]<br />
* On the next step, give a name to the VM, check "Customize configuration", and optionally choose a network.<br />
* Before pressing "Begin Installation", add Aurora release and (optionally) home images.<br />
[[File:newvmaddimage.png]]<br />
To do that, press "Add Hardware", then choose "Storage", then "Select or create custom image", and then select the image.<br />
* Ensure you have added all the images you need and that the boot device is the first image, then click "Begin Installation".<br />
[[File:newvmstartinstall.png]]<br />
* The VM should boot shortly. Just press Enter when asked for the "liveuser" password.<br />
[[File:newvmready.png]]<br />
<br />
[[Category:Not_public]][[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Aurora_VM_imagesAurora VM images2022-12-08T07:44:09Z<p>A.M.Suharev: </p>
<hr />
<div>== VM images ==<br />
<br />
== VirtualBox Setup ==<br />
<br />
== QEMU/KVM Setup ==<br />
<br />
<br />
<br />
[[Category:Not_public]][[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Aurora_VM_imagesAurora VM images2022-12-08T07:41:29Z<p>A.M.Suharev: /* VirtManager Setup */</p>
<hr />
<div>== VM images ==<br />
<br />
== VirtualBox Setup ==<br />
<br />
== QEMU/KVM Setup ==</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Aurora_VM_imagesAurora VM images2022-12-08T07:40:43Z<p>A.M.Suharev: </p>
<hr />
<div>== VM images ==<br />
<br />
== VirtualBox Setup ==<br />
<br />
== VirtManager Setup ==</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Aurora_VM_imagesAurora VM images2022-12-08T07:38:43Z<p>A.M.Suharev: Created page with "PLEASEWRITEME"</p>
<hr />
<div>PLEASEWRITEME</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Workflow_quick_referenceWorkflow quick reference2022-06-29T12:31:52Z<p>A.M.Suharev: /* Register and prepare account */</p>
<hr />
<div>== Register and prepare account ==<br />
* Register at the BINP/GCF cluster. Contact [[User:A.M.Suharev|Andrey Sukharev]] or [[User:D.A.Maksimov|Dmitry Maximov]]<br />
<br />
* Log in to <code>stark.inp.nsk.su</code> (BINP local access only) or <code>proxima.inp.nsk.su</code> (accessible from the Internet, pubkey authentication only)<br />
<br />
The login servers have similar configuration and share common <code>/home</code>.<br />
<br />
<code>git</code> is accessible via the <code>ssh</code> protocol with key authorization.<br />
<br />
* If necessary, create an <code>ssh</code> key using<br />
ssh-keygen<br />
<br />
Accept all the defaults; passwordless keys are allowed.<br />
<br />
Then you'll have in your <code>~/.ssh/</code> two files <code>id_rsa</code> and <code>id_rsa.pub</code> - these are your private and public keys.<br />
Full paths to the files are displayed in the terminal.<br />
<br />
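The key-generation step can also be done non-interactively, as sketched below. The temporary directory and the ed25519 key type are illustrative assumptions, not cluster requirements; the default RSA key named above works just as well:

```shell
# Generate a passwordless key pair without any prompts.
# -t ed25519 : key type (assumption; RSA as in the text is also fine)
# -f ...     : explicit output path, illustrative only
# -N ""      : empty passphrase (passwordless keys are allowed here)
keydir="$(mktemp -d)"
ssh-keygen -t ed25519 -f "$keydir/id_ed25519" -N "" -q

# The private and public keys now sit side by side:
ls "$keydir"
# The public key is the part to paste into the GitLab keys form:
cat "$keydir/id_ed25519.pub"
```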
* Log in to <code>gitlab</code> server https://git.inp.nsk.su/ using the same name/password as for <code>stark</code><br />
* Register your public key for your account, i.e. add the contents of ~/.ssh/id_rsa.pub to the form at the following link<br />
https://git.inp.nsk.su/-/profile/keys<br />
<br />
== Create working repository (fork) ==<br />
Open central repository<br />
https://git.inp.nsk.su/sctau/aurora<br />
<br />
and make fork.<br />
<br />
<br />
== Tune working environment ==<br />
Each time you log in to a server <code>stark</code>/<code>proxima</code>, you need to<br />
setupSCTAU<br />
asetup ''The software version''<br />
<br />
Available versions are at least:<br />
* 0.2.3 "release" - the environment before the first official release<br />
asetup Aurora,0.2.3<br />
* 1.0.0 release - fixed build for detector investigation tasks, for physics and <br />
other tasks requiring a stable environment. The 1.0.X series is intended for bug fixes.<br />
asetup Aurora,1.0.0<br />
* master - branch and build for future development, without any warranty of <br />
stability, compatibility or correct operation.<br />
asetup Aurora,master,latest<br />
<br />
Ask software coordinators if you're unsure what to put as ''the software version''.<br />
<br />
To tune git, do the following once:<br />
<br />
First, do:<br />
git sctau init-config<br />
<br />
Check if the settings are correct. But the defaults should be fine.<br />
<br />
Then apply the settings:<br />
git sctau init-config --apply<br />
<br />
== Tune working directory ==<br />
To develop new code or to modify existing code, do the following:<br />
<br />
create a workarea in your home directory (at <code>stark</code>/<code>proxima</code>)<br />
mkdir workarea<br />
cd workarea<br />
<br />
create directories for build and run:<br />
mkdir build run<br />
<br />
workarea preparation (just once):<br />
git sctau init-workdir ssh://git@git.inp.nsk.su/sctau/aurora.git<br />
<br />
Go to working directory<br />
cd aurora<br />
<br />
Fetch updates from the head repository (must be done before '''creating a branch''' when the workarea has existed for a long time)<br />
git fetch upstream<br />
<br />
Create a working branch. Give it a sensible name:<br />
git checkout -b <TopicDevelopmentBranch> upstream/<target_branch> --no-track<br />
<br />
<br />
'''The line above will not work if you simply copy-paste it. It is intentional. It is important to choose correct target branch. If unsure ask software coordinators.'''<br />
<br />
<br />
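The placeholder syntax above can be exercised safely in a scratch repository before touching the real one. The names `fix-dc-geometry` and `upstream/master` below are purely illustrative stand-ins for a topic branch and a target branch:

```shell
# Build a throwaway "upstream" repository to branch from.
base="$(mktemp -d)"
git init -q -b master "$base/up"      # -b needs git >= 2.28
cd "$base/up"
git config user.email you@example.com  # illustrative identity
git config user.name Demo
git commit -q --allow-empty -m 'initial'

# Clone it as a work directory and add an "upstream" remote,
# mimicking the fork-plus-upstream layout described in the text.
cd "$base"
git clone -q "$base/up" work
cd work
git remote add upstream "$base/up"
git fetch -q upstream

# Substitute real names for the placeholders:
git checkout -b fix-dc-geometry upstream/master --no-track
git branch --show-current   # prints fix-dc-geometry
```

With `--no-track`, the new branch carries no upstream tracking configuration, so a later `git push --set-upstream origin <branch>` sets the fork, not the head repository, as its push target.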
To modify an existing package check it out:<br />
git sctau addpkg GenExamples<br />
<br />
To create a new package, one should create its whole directory structure, providing CMakeLists.txt and everything else.<br />
<br />
''Before creating a new package, ask the software coordinators how to name it and where to place it''<br />
<br />
Useful commands to manage packages in your working area:<br />
<br />
List local packages<br />
git sctau listpkg<br />
<br />
List packages in the whole repository<br />
git sctau listpkg --all<br />
or with regexp filter<br />
git sctau listpkg --all 'Det'<br />
git sctau listpkg --all '/G4.*U'<br />
<br />
<br />
Remove a local package (the repository is left intact)<br />
git sctau rmpkg <PackageName><br />
<br />
It is recommended to keep locally only the packages under active development.<br />
<br />
== Build and run ==<br />
To build:<br />
cd ../build/<br />
cmake ../aurora/Projects/WorkDir<br />
make<br />
<br />
To set up the local environment (to use locally built packages instead of the default versions):<br />
source x86_64-slc7-gcc9-opt/setup.sh<br />
<br />
To run<br />
cd ../run<br />
<br />
Run primary generators:<br />
ctaurun GenExamples/evtgen.py <br />
<br />
Run full simulation:<br />
ctaurun G4SimExamples/fullsim_example.py<br />
<br />
== Commit changes ==<br />
In the 'aurora' directory:<br />
<br />
Add changed files<br />
git add <list of files><br />
<br />
remove unnecessary files<br />
git rm <list of files><br />
<br />
<br />
Commit changes in the local repository:<br />
git commit -m 'Meaningful message concerning the introduced changes'<br />
<br />
Each commit should contain a minimal set of logically interconnected changes.<br />
You may (and will) have many commits when developing a package.<br />
<br />
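The add/commit cycle can be rehearsed in a throwaway local repository before working on the real one; all paths, names and file contents below are illustrative:

```shell
# Set up a disposable repository to practice the cycle.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email you@example.com   # illustrative identity
git config user.name "Your Name"

# Add changed files, then commit with a meaningful message.
echo 'hello' > README
git add README
git commit -q -m 'Add README with a greeting'

# A second, logically separate change gets its own commit.
echo 'more' >> README
git commit -q -am 'Extend README'

git log --oneline | wc -l   # two commits, one per logical change
```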
Put the changes to the server:<br />
git push<br />
<br />
When pushing for the first time, it is necessary to do it like this:<br />
git push --set-upstream origin <TopicDevelopmentBranch><br />
<br />
<br />
After the push, <code>git</code> displays a link to be followed for a <code>Merge Request</code> (adding your contributions to common repository).<br />
<br />
When doing a merge request, make sure:<br />
* the branch is correct<br />
* changes are well described<br />
** reasons for the changes are demonstrated<br />
** the changes are described<br />
** influence on the other software and possible side effects are indicated<br />
** there are links to related issues, if any<br />
<br />
== Running graphical applications remotely ==<br />
If your connection does not allow you to run X applications directly, please try [[x2go]].<br />
<br />
<br />
[[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Workflow_quick_referenceWorkflow quick reference2021-04-01T01:34:24Z<p>A.M.Suharev: /* Tune working environment */</p>
<hr />
<div>== Register and prepare account ==<br />
* Register at the BINP/GCF cluster. Contact [[User:A.M.Suharev|Andrey Sukharev]] or [[User:D.A.Maksimov|Dmitry Maximov]]<br />
<br />
* Log in to <code>stark.inp.nsk.su</code> (BINP local access only) or <code>proxima.inp.nsk.su</code> (accessible from the Internet, pubkey authentication only)<br />
<br />
The login servers have similar configuration and share common <code>/home</code>.<br />
<br />
<code>git</code> is accessible via the <code>ssh</code> protocol with key authorization.<br />
<br />
* If necessary, create an <code>ssh</code> key using<br />
ssh-keygen<br />
<br />
Accept all the defaults; passwordless keys are allowed.<br />
<br />
Then you'll have in your <code>~/.ssh/</code> two files <code>id_rsa</code> and <code>id_rsa.pub</code> - these are your private and public keys.<br />
Full paths to the files are displayed in the terminal.<br />
<br />
* Log in to <code>gitlab</code> server https://git.inp.nsk.su/ using the same name/password as for <code>stark</code><br />
* Register your public key for your account, i.e. add the contents of ~/.ssh/id_rsa.pub to the form at the following link<br />
https://git.inp.nsk.su/profile/keys<br />
<br />
<br />
== Create working repository (fork) ==<br />
Open central repository<br />
https://git.inp.nsk.su/sctau/aurora<br />
<br />
and make fork.<br />
<br />
<br />
== Tune working environment ==<br />
Each time you log in to a server <code>stark</code>/<code>proxima</code>, you need to<br />
setupSCTAU<br />
asetup ''The software version''<br />
<br />
Available versions are at least:<br />
* 0.2.3 "release" - the environment before the first official release<br />
asetup Aurora,0.2.3<br />
* 1.0.0 release - fixed build for detector investigation tasks, for physics and <br />
other tasks requiring a stable environment. The 1.0.X series is intended for bug fixes.<br />
asetup Aurora,1.0.0<br />
* master - branch and build for future development, without any warranty of <br />
stability, compatibility or correct operation.<br />
asetup Aurora,master,latest<br />
<br />
Ask software coordinators if you're unsure what to put as ''the software version''.<br />
<br />
To tune git, do the following once:<br />
<br />
First, do:<br />
git sctau init-config<br />
<br />
Check if the settings are correct. But the defaults should be fine.<br />
<br />
Then apply the settings:<br />
git sctau init-config --apply<br />
<br />
== Tune working directory ==<br />
To develop new code or to modify existing code, do the following:<br />
<br />
create a workarea in your home directory (at <code>stark</code>/<code>proxima</code>)<br />
mkdir workarea<br />
cd workarea<br />
<br />
create directories for build and run:<br />
mkdir build run<br />
<br />
workarea preparation (just once):<br />
git sctau init-workdir ssh://git@git.inp.nsk.su/sctau/aurora.git<br />
<br />
Go to working directory<br />
cd aurora<br />
<br />
Fetch updates from the head repository (must be done before '''creating a branch''' when the workarea has existed for a long time)<br />
git fetch upstream<br />
<br />
Create a working branch. Give it a sensible name:<br />
git checkout -b <TopicDevelopmentBranch> upstream/<target_branch> --no-track<br />
<br />
<br />
'''The line above will not work if you simply copy-paste it. It is intentional. It is important to choose correct target branch. If unsure ask software coordinators.'''<br />
<br />
<br />
To modify an existing package check it out:<br />
git sctau addpkg GenExamples<br />
<br />
To create a new package, one should create its whole directory structure, providing CMakeLists.txt and everything else.<br />
<br />
''Before creating a new package, ask the software coordinators how to name it and where to place it''<br />
<br />
Useful commands to manage packages in your working area:<br />
<br />
List local packages<br />
git sctau listpkg<br />
<br />
List packages in the whole repository<br />
git sctau listpkg --all<br />
or with regexp filter<br />
git sctau listpkg --all 'Det'<br />
git sctau listpkg --all '/G4.*U'<br />
<br />
<br />
Remove a local package (the repository is left intact)<br />
git sctau rmpkg <PackageName><br />
<br />
It is recommended to keep locally only the packages under active development.<br />
<br />
== Build and run ==<br />
To build:<br />
cd ../build/<br />
cmake ../aurora/Projects/WorkDir<br />
make<br />
<br />
To set up the local environment (to use locally built packages instead of the default versions):<br />
source x86_64-slc7-gcc8-opt/setup.sh<br />
<br />
To run<br />
cd ../run<br />
<br />
Run primary generators:<br />
ctaurun GenExamples/evtgen.py <br />
<br />
Run full simulation:<br />
ctaurun G4SimExamples/fullsim_example.py<br />
<br />
== Commit changes ==<br />
In the 'aurora' directory:<br />
<br />
Add changed files<br />
git add <list of files><br />
<br />
remove unnecessary files<br />
git rm <list of files><br />
<br />
<br />
Commit changes in the local repository:<br />
git commit -m 'Meaningful message concerning the introduced changes'<br />
<br />
Each commit should contain a minimal set of logically interconnected changes.<br />
You may (and will) have many commits when developing a package.<br />
<br />
Put the changes to the server:<br />
git push<br />
<br />
When pushing for the first time, it is necessary to do it like this:<br />
git push --set-upstream origin <TopicDevelopmentBranch><br />
<br />
<br />
After the push, <code>git</code> displays a link to be followed for a <code>Merge Request</code> (adding your contributions to common repository).<br />
<br />
When doing a merge request, make sure:<br />
* the branch is correct<br />
* changes are well described<br />
** reasons for the changes are demonstrated<br />
** the changes are described<br />
** influence on the other software and possible side effects are indicated<br />
** there are links to related issues, if any<br />
<br />
== Running graphical applications remotely ==<br />
If your connection does not allow you to run X applications directly, please try [[x2go]].<br />
<br />
<br />
[[Category:Not_public]][[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Software_publicationsSoftware publications2021-03-02T10:32:47Z<p>A.M.Suharev: </p>
<hr />
<div>* ''Software framework for the Super Charm-Tau factory detector project'', submitted to vCHEP2021 [ [[Media:vchep2021.pdf | pdf]] ]<br />
<br />
[[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/File:Vchep2021.pdfFile:Vchep2021.pdf2021-03-02T10:30:53Z<p>A.M.Suharev: The "Software framework for the Super Charm-Tau factory detector project" article submmitted to vCHEP 2021.</p>
<hr />
<div>The "Software framework for the Super Charm-Tau factory detector project" article submmitted to vCHEP 2021.</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Software_publicationsSoftware publications2021-03-02T10:29:44Z<p>A.M.Suharev: Created page with "* ''Software framework for the Super Charm-Tau factory detector project'', submitted to vCHEP2021 [ pdf ] Category:Software"</p>
<hr />
<div>* ''Software framework for the Super Charm-Tau factory detector project'', submitted to vCHEP2021 [ [[File:vchep2021.pdf|pdf]] ]<br />
<br />
[[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Internal_MainInternal Main2021-03-02T10:26:09Z<p>A.M.Suharev: /* Software and computing */</p>
<hr />
<div>Welcome to the internal wiki page of the Super charm-tau factory detector!<br />
<br />
[[CREMLINplus]]<br />
<br />
= General Information =<br />
* [[SCT Collaboration structure]]<br />
* [[SCT detector road map]]<br />
* [[SCT mailing lists]]<br />
* [[Upcoming events 2018]]<br />
* [[The SCT Detector Naming Challenge]]<br />
<br />
== Collaboration Meetings ==<br />
* [[SCT General meetings]]<br />
* [[SCT project office meetings]]<br />
* [[SCT working groups meetings]]<br />
<br />
= Hardware =<br />
* [[:Category:Hardware|All pages in the Hardware category]]<br />
<br />
== Subdetectors ==<br />
* [[Inner tracker]]<br />
* [[Drift chamber]]<br />
* [[PID]]<br />
* [[Magnet]]<br />
* [[Electromagnetic calorimeter]]<br />
* [[Muon system]]<br />
<br />
= Software and computing =<br />
* [[Software and simulation task list]]<br />
* [[Tutorials & how-to's]]<br />
* [[Software_publications|Publications]]<br />
* [[:Category:Software|All pages in the Software category]]<br />
<br />
== Software basis ==<br />
* [[Software git workflow]]<br />
* [[Workflow quick reference]]<br />
* [[C++ coding guidelines]]<br />
* [[How-to: implement subdetector model]]<br />
* [[Geometry validation scripts]]<br />
<br />
== Simulation ==<br />
* [[Event generators]]<br />
* [[SCT software releases]]<br />
* [[Event data model]]<br />
* [[Parametric simulation]]<br />
* [[Full simulation]]<br />
* [[Detector geometry description]]<br />
** [[Inner tracker geometry]]<br />
** [[DC geometry]]<br />
** [[FARICH geometry]]<br />
** [[Calorimeter geometry]]<br />
** [[Muon system geometry]]<br />
** [[Magnet geometry]]<br />
<br />
== Data analysis ==<br />
* [[How-to for parametric simulation data analysis]]<br />
* [[MC Data Sets|Available MC samples]]<br />
* [[Event selection framework]]<br />
<br />
= Physics Case =<br />
* [[Top-10 topics for feasibility studies]]<br />
* [[Inclusive particle momentum spectra]]<br />
* [[Tau physics with polarization]]<br />
** [[Tau EDM and g-2]]<br />
<br />
= Misc =<br />
* [[SCT_talks|List of Talks]]<br />
* [[Documents]]<br />
* [[Plots]]<br />
<br />
[[Category:Not_public]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/X2goX2go2020-11-05T11:28:11Z<p>A.M.Suharev: Created page with "= How to run remote graphical applications using x2go = [https://wiki.x2go.org/doku.php/start X2go] is generally a program to run X applications via slow network channels. Th..."</p>
<hr />
<div>= How to run remote graphical applications using x2go =<br />
<br />
[https://wiki.x2go.org/doku.php/start X2go] is generally a program to run X applications via slow network channels.<br />
There are straightforward installation instructions for various systems on the site.<br />
<br />
First you need to start x2goclient and set up new session there.<br />
For example, host could be proxima.inp.nsk.su, session type "Single application" with "Terminal" selected from drop-down menu at the right.<br />
<br />
Then each time you run x2goclient you may select the session, and it should open an x-terminal from the remote host.<br />
<br />
To run sctau software you need to<br />
export LIBGL_ALWAYS_INDIRECT=1<br />
in the terminal and then do all sctau setup stuff (setupSCTAU, asetup, source whatever_path/x86_64-slc7-gcc8-opt/setup.sh) listed in the [[Workflow quick reference]].<br />
Then you should be able to start, for instance, GeoDisplay from the terminal via x2go client.<br />
<br />
[[category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Workflow_quick_referenceWorkflow quick reference2020-11-05T11:20:06Z<p>A.M.Suharev: </p>
<hr />
<div>== Register and prepare account ==<br />
* Register at the BINP/GCF cluster. Contact [[User:A.M.Suharev|Andrey Sukharev]] or [[User:D.A.Maksimov|Dmitry Maximov]]<br />
<br />
* Log in to <code>stark.inp.nsk.su</code> (BINP local access only) or <code>proxima.inp.nsk.su</code> (accessible from the Internet, pubkey authentication only)<br />
<br />
The login servers have similar configuration and share common <code>/home</code>.<br />
<br />
<code>git</code> is accessible via the <code>ssh</code> protocol with key authorization.<br />
<br />
* If necessary, create an <code>ssh</code> key using<br />
ssh-keygen<br />
<br />
Accept all the defaults; passwordless keys are allowed.<br />
<br />
Then you'll have in your <code>~/.ssh/</code> two files <code>id_rsa</code> and <code>id_rsa.pub</code> - these are your private and public keys.<br />
Full paths to the files are displayed in the terminal.<br />
<br />
* Log in to <code>gitlab</code> server https://git.inp.nsk.su/ using the same name/password as for <code>stark</code><br />
* Register your public key for your account, i.e. add the contents of ~/.ssh/id_rsa.pub to the form at the following link<br />
https://git.inp.nsk.su/profile/keys<br />
<br />
<br />
== Create working repository (fork) ==<br />
Open central repository<br />
https://git.inp.nsk.su/sctau/aurora<br />
<br />
and make fork.<br />
<br />
<br />
== Tune working environment ==<br />
Each time you log in to a server <code>stark</code>/<code>proxima</code>, you need to<br />
setupSCTAU<br />
asetup SCTauSim,master,latest<br />
The latter one selects the build to use.<br />
<br />
''Ask software coordinators if you're unsure which build to use''<br />
<br />
To tune git, do the following once:<br />
<br />
First, do:<br />
git sctau init-config<br />
<br />
Check if the settings are correct. But the defaults should be fine.<br />
<br />
Then apply the settings:<br />
git sctau init-config --apply<br />
<br />
<br />
== Tune working directory ==<br />
To develop new code or to modify existing code, do the following:<br />
<br />
create a workarea in your home directory (at <code>stark</code>/<code>proxima</code>)<br />
mkdir workarea<br />
cd workarea<br />
<br />
create directories for build and run:<br />
mkdir build run<br />
<br />
workarea preparation (just once):<br />
git sctau init-workdir ssh://git@git.inp.nsk.su/sctau/aurora.git<br />
<br />
Go to working directory<br />
cd aurora<br />
<br />
Fetch updates from the head repository (must be done before '''creating a branch''' when the workarea has existed for a long time)<br />
git fetch upstream<br />
<br />
Create a working branch. Give it a sensible name:<br />
git checkout -b <TopicDevelopmentBranch> upstream/<target_branch> --no-track<br />
<br />
<br />
'''The line above will not work if you simply copy-paste it. It is intentional. It is important to choose correct target branch. If unsure ask software coordinators.'''<br />
<br />
<br />
To modify an existing package check it out:<br />
git sctau addpkg GenExamples<br />
<br />
To create a new package, one should create its whole directory structure, providing CMakeLists.txt and everything else.<br />
<br />
''Before creating a new package, ask the software coordinators how to name it and where to place it''<br />
<br />
Useful commands to manage packages in your working area:<br />
<br />
List local packages<br />
git sctau listpkg<br />
<br />
List packages in the whole repository<br />
git sctau listpkg --all<br />
or with regexp filter<br />
git sctau listpkg --all 'Det'<br />
git sctau listpkg --all '/G4.*U'<br />
<br />
<br />
Remove a local package (the repository is left intact)<br />
git sctau rmpkg <PackageName><br />
<br />
It is recommended to keep locally only the packages under active development.<br />
<br />
== Build and run ==<br />
To build:<br />
cd ../build/<br />
cmake ../aurora/Projects/WorkDir<br />
make<br />
<br />
To set up the local environment (to use locally built packages instead of the default versions):<br />
source x86_64-slc7-gcc8-opt/setup.sh<br />
<br />
To run<br />
cd ../run<br />
<br />
Run primary generators:<br />
ctaurun GenExamples/evtgen.py <br />
<br />
Run full simulation:<br />
ctaurun G4SimExamples/fullsim_example.py<br />
<br />
== Commit changes ==<br />
In the 'aurora' directory:<br />
<br />
Add changed files<br />
git add <list of files><br />
<br />
remove unnecessary files<br />
git rm <list of files><br />
<br />
<br />
Commit changes in the local repository:<br />
git commit -m 'Meaningful message concerning the introduced changes'<br />
<br />
Each commit should contain a minimal set of logically interconnected changes.<br />
You may (and will) have many commits when developing a package.<br />
<br />
Put the changes to the server:<br />
git push<br />
<br />
When pushing for the first time, it is necessary to do it like this:<br />
git push --set-upstream origin <TopicDevelopmentBranch><br />
<br />
<br />
After the push, <code>git</code> displays a link to be followed for a <code>Merge Request</code> (adding your contributions to common repository).<br />
<br />
When doing a merge request, make sure:<br />
* the branch is correct<br />
* changes are well described<br />
** reasons for the changes are demonstrated<br />
** the changes are described<br />
** influence on the other software and possible side effects are indicated<br />
** there are links to related issues, if any<br />
<br />
== Running graphical applications remotely ==<br />
If your connection does not allow you to run X applications directly, please try [[x2go]].<br />
<br />
<br />
[[Category:Not_public]][[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Detector_and_event_visualizationDetector and event visualization2020-05-18T04:27:05Z<p>A.M.Suharev: </p>
<hr />
<div>== Detector Geometry Visualization ==<br />
<br />
== Event Visualization ==<br />
<br />
[[Category:Software]]<br />
[[Category:Not_public]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Detector_and_event_visualizationDetector and event visualization2020-05-18T04:25:47Z<p>A.M.Suharev: Created page with "== Detector Geometry Visualization == == Event Visualization == Category:Software"</p>
<hr />
<div>== Detector Geometry Visualization ==<br />
<br />
== Event Visualization ==<br />
<br />
[[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Using_batch_system_at_BINP/GCFUsing batch system at BINP/GCF2019-12-02T14:17:10Z<p>A.M.Suharev: </p>
<hr />
<div>== Batch system usage simplified ==<br />
<br />
The BINP/GCF batch system is Sun Grid Engine.<br />
<br />
To submit a job, do:<br />
<br />
# After login set up your working environment (setupSCTAU, asetup, source build/x86..)<br />
# Change to the job working directory<br />
# Submit the job using<br />
<br />
qsub -cwd -V -b y -shell n program_name program_parameters<br />
<br />
For instance,<br />
<br />
qsub -cwd -V -b y -shell n ctaurun.py test_calor_clusters.py<br />
<br />
Where <br />
* <tt>qsub</tt> - submit a job<br />
* <tt>-cwd</tt> - work in the current directory and put the job's stdout and stderr there (the files will be program_name.oNUMBER and program_name.eNUMBER)<br />
* <tt>-V</tt> - transmit environment variables (they were set up at step 1) into the job<br />
* <tt>-b y -shell n</tt> - run program_name as an executable file, do not spawn an extra shell<br />
<br />
To check job status, issue<br />
qstat<br />
<br />
For details please refer to <tt>man qsub</tt>, <tt>man qstat</tt>.<br />
<br />
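Since the same four flags appear in every submission, a small wrapper can keep them consistent. The function name <tt>sct_submit</tt> below is an illustrative assumption, not part of the cluster tooling:

```shell
# sct_submit: wrap the recommended qsub invocation so the flags
# (-cwd -V -b y -shell n) are always applied consistently.
sct_submit() {
    qsub -cwd -V -b y -shell n "$@"
}

# Usage (on the cluster):
#   sct_submit ctaurun.py test_calor_clusters.py
```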
[[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Using_batch_system_at_BINP/GCFUsing batch system at BINP/GCF2019-10-15T12:58:48Z<p>A.M.Suharev: Created page with "== Batch system usage simplified == The BINP/GCF batch system is Sun Grid Engine. To submit a job, do: # After login set up your working environment (setupSCTAU, asetup, so..."</p>
<hr />
<div>== Batch system usage simplified ==<br />
<br />
The BINP/GCF batch system is Sun Grid Engine.<br />
<br />
To submit a job, do:<br />
<br />
# After login, set up your working environment (setupSCTAU, asetup, source build/x86..)<br />
# Change to the job working directory<br />
# Submit the job using<br />
<br />
qsub -cwd -V -b y -shell n program_name program_parameters<br />
<br />
For instance,<br />
<br />
qsub -cwd -V -b y -shell n ctaurun.py test_calor_clusters.py<br />
<br />
Where <br />
* <tt>qsub</tt> - submit a job<br />
* <tt>-cwd</tt> - work in the current directory and put the job's stdout and stderr there (the files will be program_name.oNUMBER and program_name.eNUMBER)<br />
* <tt>-V</tt> - pass environment variables (set up at step 1) into the job<br />
* <tt>-b y -shell n</tt> - run program_name as an executable file, do not spawn an extra shell<br />
<br />
To check job status, issue<br />
qstat<br />
<br />
For details please refer to <tt>man qsub</tt>, <tt>man qstat</tt>.</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Workflow_quick_referenceWorkflow quick reference2019-10-04T10:30:19Z<p>A.M.Suharev: </p>
<hr />
<div>== Register and prepare account ==<br />
* Register at the BINP/GCF cluster. Contact [[User:A.M.Suharev|Andrey Sukharev]] or [[User:D.A.Maksimov|Dmitry Maximov]]<br />
<br />
* Log in to <code>stark.inp.nsk.su</code> (BINP local access only) or <code>proxima.inp.nsk.su</code> (accessible from the Internet, pubkey authentication only)<br />
<br />
The login servers have similar configuration and share common <code>/home</code>.<br />
<br />
<code>git</code> access uses the <code>ssh</code> protocol with key authorization.<br />
<br />
* If necessary, create an <code>ssh</code> key using<br />
ssh-keygen<br />
<br />
Agree to everything; passwordless keys are allowed.<br />
<br />
You will then have two files in <code>~/.ssh/</code>: <code>id_rsa</code> (private key) and <code>id_rsa.pub</code> (public key).<br />
The full paths to the files are displayed in the terminal.<br />
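The key generation can also be done non-interactively, as a sketch (the temporary directory below is a stand-in for <code>~/.ssh</code>, and the comment string is a placeholder; the interactive defaults above are equally fine):<br />

```shell
# Non-interactive sketch of the key generation step above.
# Assumption: the directory and key comment are placeholders for illustration.
keydir=$(mktemp -d)                                   # stand-in for ~/.ssh
ssh-keygen -q -t rsa -N '' -f "$keydir/id_rsa" -C "gcf-access"
ls "$keydir"                                          # lists id_rsa and id_rsa.pub
```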
<br />
* Log in to the <code>gitlab</code> server https://git.inp.nsk.su/ using the same name/password as for <code>stark</code><br />
* Register your public key for your account, i.e. add the contents of ~/.ssh/id_rsa.pub to the form at the following link<br />
https://git.inp.nsk.su/profile/keys<br />
<br />
<br />
== Create working repository (fork) ==<br />
Open the central repository<br />
https://git.inp.nsk.su/sctau/aurora<br />
<br />
and make a fork.<br />
<br />
<br />
== Tune working environment ==<br />
Each time you log in to a server <code>stark</code>/<code>proxima</code>, you need to<br />
setupSCTAU<br />
asetup SCTauSim,master,latest<br />
The latter one selects the build to use.<br />
<br />
''Ask software coordinators if you're unsure which build to use''<br />
<br />
To tune git, do the following once:<br />
<br />
First, do:<br />
git sctau init-config<br />
<br />
Check that the settings are correct; the defaults should be fine.<br />
<br />
Then apply the settings:<br />
git sctau init-config --apply<br />
<br />
<br />
== Tune working directory ==<br />
To develop new code or modify existing code:<br />
<br />
Create a workarea in your home directory (at <code>stark</code>/<code>proxima</code>)<br />
mkdir workarea<br />
cd workarea<br />
<br />
Create directories for building and running:<br />
mkdir build run<br />
<br />
Prepare the workarea (just once):<br />
git sctau init-workdir ssh://git@git.inp.nsk.su/sctau/aurora.git<br />
<br />
Go to the working directory<br />
cd aurora<br />
<br />
Fetch updates from the head repository (do this before '''creating a branch''' if the workarea has existed for a long time)<br />
git fetch upstream<br />
<br />
Create a working branch. Give it a sensible name:<br />
git checkout -b <TopicDevelopmentBranch> upstream/<target_branch> --no-track<br />
<br />
<br />
'''The line above will not work if you simply copy-paste it. This is intentional: it is important to choose the correct target branch. If unsure, ask software coordinators.'''<br />
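As an illustration of what <code>--no-track</code> does, the branching step can be rehearsed in a throwaway repository. All names below (the <code>upstream</code> remote, <code>main</code>, <code>MyTopicBranch</code>) are placeholders for this sketch, not real Aurora branches; it assumes a reasonably recent git.<br />

```shell
# Sketch: create a topic branch from an upstream ref without tracking it.
src=$(mktemp -d)
git -C "$src" init -q -b main                     # needs git >= 2.28
git -C "$src" -c user.email=demo@example.com -c user.name=Demo \
    commit -q --allow-empty -m 'initial commit'
work=$(mktemp -d)
git clone -q "$src" "$work/aurora"
cd "$work/aurora"
git remote rename origin upstream                 # mimic the workarea layout
git fetch -q upstream
# Equivalent of: git checkout -b <TopicDevelopmentBranch> upstream/<target_branch> --no-track
git checkout -q -b MyTopicBranch upstream/main --no-track
git branch --show-current                         # prints MyTopicBranch
git config branch.MyTopicBranch.merge || echo "no upstream tracking set"
```

Without <code>--no-track</code>, the new branch would be configured to pull from <code>upstream</code>; with it, pushes and pulls stay explicit.<br />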
<br />
<br />
To modify an existing package, check it out:<br />
git sctau addpkg GenExamples<br />
<br />
To create a new package, one should create its whole directory structure, provide CMakeLists.txt, and everything else it needs.<br />
<br />
''Before creating a new package, ask software coordinators how to name it and where to place it''<br />
<br />
Useful commands to manage packages in your working area:<br />
<br />
List local packages<br />
git sctau listpkg<br />
<br />
List packages in the whole repository<br />
git sctau listpkg --all<br />
or with regexp filter<br />
git sctau listpkg --all 'Det'<br />
git sctau listpkg --all '/G4.*U'<br />
<br />
<br />
Remove a local package (the repository is left intact)<br />
git sctau rmpkg <PackageName><br />
<br />
It is recommended to keep locally only the packages under active development.<br />
<br />
== Build and run ==<br />
To build:<br />
cd ../build/<br />
cmake ../aurora/Projects/WorkDir<br />
make<br />
<br />
To set up the local environment (to use locally built packages instead of the default versions):<br />
source x86_64-slc7-gcc7-opt/setup.sh<br />
<br />
To run<br />
cd ../run<br />
<br />
Run primary generators:<br />
ctaurun GenExamples/evtgen.py <br />
<br />
Run full simulation:<br />
ctaurun G4SimExamples/fullsim_example.py<br />
<br />
== Commit changes ==<br />
In the 'aurora' directory:<br />
<br />
Add changed files<br />
git add <list of files><br />
<br />
Remove unnecessary files<br />
git rm <list of files><br />
<br />
<br />
Commit changes in the local repository:<br />
git commit -m 'Meaningful message concerning the introduced changes'<br />
<br />
Each commit should contain a minimal set of logically interconnected changes.<br />
You may (and will) have many commits when developing a package.<br />
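The add/commit cycle above can be tried safely in a throwaway repository; the file name and commit message below are made up for illustration:<br />

```shell
# Sketch of the add/commit cycle in a disposable repository.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo 'int main() { return 0; }' > Example.cpp      # placeholder source file
git add Example.cpp
git commit -q -m 'Add minimal example source file'
git log --oneline                                  # one commit with that message
git status --short                                 # empty output: tree is clean
```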
<br />
Push the changes to the server:<br />
git push<br />
<br />
When pushing for the first time, it is necessary to do it like this:<br />
git push --set-upstream origin <TopicDevelopmentBranch><br />
<br />
<br />
After the push, <code>git</code> displays a link to follow to create a <code>Merge Request</code> (adding your contributions to the common repository).<br />
<br />
When doing a merge request, make sure:<br />
* the branch is correct<br />
* changes are well described<br />
** reasons for the changes are demonstrated<br />
** the changes are described<br />
** influence on the other software and possible side effects are indicated<br />
** there are links to related issues, if any<br />
<br />
<br />
[[Category:Not_public]][[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/How-to:_implement_subdetector_modelHow-to: implement subdetector model2018-10-30T07:46:54Z<p>A.M.Suharev: /* Detector Description */</p>
<hr />
<div>= Set up the AURORA work area =<br />
== Directories structure ==<br />
mkdir workarea<br />
cd workarea<br />
mkdir build run<br />
<br />
== Setup AURORA environment ==<br />
The environment setup should be done every time:<br />
setupSCTAU<br />
asetup SCTauSim,master,latest<br />
<br />
Aurora is now available<br />
ctaurun <script><br />
<br />
Follow the next steps described in [[Workflow quick reference]]<br />
<br />
= Detector Description =<br />
cd aurora<br />
mkdir DetectorDescription<br />
cd DetectorDescription<br />
mkdir MyBrandNewSubsystem<br />
cd MyBrandNewSubsystem<br />
mkdir xml src python jobOptions<br />
<br />
Create a file CMakeLists.txt containing something like this<br />
sctau_subdir(MyBrandNewSubsystem)<br />
sctau_depends_on_subdirs(PUBLIC External/DD4hep External/Geant4 External/ROOT)<br />
sctau_add_dd4hep_component(MyBrandNewSubsystem<br />
src/*.cpp<br />
NO_PUBLIC_HEADERS<br />
LINK_LIBRARIES GaudiKernel DD4hep ROOT Geant4)<br />
sctau_install_joboptions(jobOptions/*)<br />
sctau_install_python_modules(python/*)<br />
sctau_install_xmls(xml/*)<br />
<br />
<br />
[[Category:Not_public]]<br />
[[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Workflow_quick_referenceWorkflow quick reference2018-10-15T08:28:54Z<p>A.M.Suharev: formatting, clarification</p>
<hr />
<div>0. Register at the BINP/GCF cluster<br />
<br />
Log in to stark or proxima<br />
If you do not have one yet, create an ssh key.<br />
<br />
Log in to gitlab https://git.inp.nsk.su/ <br />
and register this key in your account<br />
<br />
You should have already completed this step.<br />
<br />
1. Open the central repository<br />
https://git.inp.nsk.su/sctau/aurora<br />
<br />
and fork it to your account.<br />
<br />
2. Set up the working environment.<br />
<br />
Create a working directory<br />
mkdir workarea<br />
cd workarea<br />
<br />
Directories for building and running<br />
mkdir build run<br />
<br />
<br />
Set up the most basic environment;<br />
this command must be run every time you log in<br />
setupSCTAU<br />
<br />
Choose the release and its version to work in.<br />
For work requiring a stable environment, e.g. physics analysis,<br />
use this variant<br />
asetup SCTauSim,0.1.0<br />
<br />
For software component development, use this one<br />
asetup SCTauSim,master,latest<br />
<br />
These steps are sufficient for simply running the ready-made examples. Then:<br />
<br />
cd run<br />
<br />
Run the primary simulation generators:<br />
ctaurun GenExamples/evtgen.py <br />
<br />
Run the full simulation:<br />
<br />
Before running the full simulation, put the file taumugamma.root<br />
into the current directory, with the input data: particles from the primary generator<br />
<br />
This file can be taken, for example, from Vitaly: /home/vvorob/public/tuples/fccedm/taumugamma.root<br />
<br />
ctaurun G4SimExamples/fullsim_example.py<br />
<br />
<br />
3. To develop new code or modify existing code, the following<br />
steps are needed:<br />
<br />
Return to workarea.<br />
<br />
Prepare the working directory (done once):<br />
git sctau init-workdir ssh://git@git.inp.nsk.su/sctau/aurora.git<br />
cd aurora<br />
<br />
Fetching updates from the head repository should be done periodically when the working directory exists for a long time and<br />
there are substantial changes upstream<br />
git fetch upstream<br />
<br />
Prepare a topic branch; this branch will be visible to other people, so choose a descriptive and meaningful name<br />
git checkout -b MyDevelopmentBranch upstream/0.1 --no-track<br />
<br />
If you want to modify an existing package:<br />
<br />
Add packages from the repository<br />
git sctau addpkg GenExamples<br />
and/or<br />
git sctau addpkg G4SimExamples<br />
<br />
If a new package is being created, you need to create the whole directory structure where it<br />
should live, write CMakeLists.txt, and everything else needed for the new<br />
package.<br />
<br />
Build<br />
cd ../build/<br />
cmake ../aurora/Projects/WorkDir<br />
make<br />
<br />
Set up the local environment<br />
<br />
this line is critically important so that locally built packages are used instead of those in the release:<br />
source x86_64-slc7-gcc7-opt/setup.sh<br />
<br />
Run<br />
cd ../run<br />
ctaurun GenExamples/evtgen.py<br />
ctaurun G4SimExamples/fullsim_example.py<br />
<br />
[[Category:Not_public]][[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Workflow_quick_referenceWorkflow quick reference2018-10-15T06:16:41Z<p>A.M.Suharev: formatting, clarification</p>
<hr />
<div>0. Register at the BINP/GCF cluster<br />
<br />
Log in to stark or proxima<br />
If you do not have one yet, create an ssh key.<br />
<br />
Log in to gitlab https://git.inp.nsk.su/ <br />
and register this key in your account<br />
<br />
You should have already completed this step.<br />
<br />
1. Open the central repository<br />
https://git.inp.nsk.su/sctau/aurora<br />
<br />
and fork it to your account.<br />
<br />
2. Set up the working environment.<br />
<br />
Create a working directory<br />
mkdir workarea<br />
cd workarea<br />
<br />
Directories for building and running<br />
mkdir build run<br />
<br />
<br />
Set up the most basic environment;<br />
this command must be run every time you log in<br />
setupSCTAU<br />
<br />
Choose the release and its version to work in.<br />
For work requiring a stable environment, e.g. physics analysis,<br />
use this variant<br />
asetup SCTauSim,0.1.0<br />
<br />
For software component development, use this one<br />
asetup SCTauSim,master,latest<br />
<br />
These steps are sufficient for simply running the ready-made examples. Then:<br />
<br />
cd run<br />
<br />
Run the primary simulation generators:<br />
ctaurun GenExamples/evtgen.py <br />
<br />
Run the full simulation:<br />
<br />
Before running the full simulation, put the file taumugamma.root<br />
into the current directory, with the input data: particles from the primary generator<br />
<br />
This file can be taken, for example, from Vitaly: /home/vvorob/public/tuples/fccedm/taumugamma.root<br />
<br />
ctaurun G4SimExamples/fullsim_example.py<br />
<br />
<br />
3. To develop new code or modify existing code, the following<br />
steps are needed<br />
return to workarea<br />
# Prepare the working directory (done once)<br />
git sctau init-workdir ssh://git@git.inp.nsk.su/sctau/aurora.git<br />
cd aurora<br />
<br />
# Fetching updates from the head repository<br />
# should be done periodically when the working directory exists for a long time<br />
# and there are substantial changes upstream<br />
git fetch upstream<br />
<br />
# Prepare a topic branch; this branch will be visible to other people,<br />
# so choose a descriptive and meaningful name<br />
git checkout -b MyDevelopmentBranch upstream/0.1 --no-track<br />
<br />
# If you want to modify an existing package,<br />
# add packages from the repository<br />
git sctau addpkg GenExamples<br />
and/or<br />
git sctau addpkg G4SimExamples<br />
<br />
# If a new package is needed, create the whole directory structure where<br />
it should live, write CMakeLists.txt, and everything else needed for a new<br />
package.<br />
<br />
# Build<br />
cd ../build/<br />
cmake ../aurora/Projects/WorkDir<br />
make<br />
<br />
# Set up the local environment<br />
# this line is critically important so that locally built<br />
# packages are used instead of those in the release<br />
source x86_64-slc7-gcc7-opt/setup.sh<br />
<br />
# Run<br />
cd ../run<br />
ctaurun GenExamples/evtgen.py<br />
ctaurun G4SimExamples/fullsim_example.py<br />
<br />
[[Category:Not_public]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/X_server_settingsX server settings2018-09-10T06:35:00Z<p>A.M.Suharev: </p>
<hr />
<div>To allow GL applications to run from a remote server via the X protocol, one might need to add the following to the local X server configuration:<br />
<br />
Section "ServerFlags" <br />
Option "IndirectGLX" "on" <br />
EndSection <br />
<br />
On the modern Linux systems the appropriate place for these lines would be some file in the /etc/X11/xorg.conf.d/ directory, for instance, /etc/X11/xorg.conf.d/glxsettings.conf.<br />
<br />
This recipe sometimes seems to help with proprietary NVidia drivers.<br />
<br />
<br />
Another workaround is to use x2go. Run a terminal at the remote server using x2goclient. Then set a variable:<br />
<br />
export LIBGL_ALWAYS_INDIRECT=1<br />
<br />
Run your GL application from the terminal.<br />
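Before launching the application, one might check that the variable actually reaches child processes started from that terminal, e.g.:<br />

```shell
# Make GLX use indirect rendering for applications started from this shell.
export LIBGL_ALWAYS_INDIRECT=1
# Confirm the setting is inherited by child processes (your GL application).
sh -c 'echo "LIBGL_ALWAYS_INDIRECT=$LIBGL_ALWAYS_INDIRECT"'   # prints LIBGL_ALWAYS_INDIRECT=1
```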
<br />
[[Category:Known issues]]<br />
[[Category:Sacred knowledge]]<br />
[[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/X_server_settingsX server settings2018-09-10T06:34:43Z<p>A.M.Suharev: </p>
<hr />
<div>To allow GL applications to run from a remote server via the X protocol, one might need to add the following to the local X server configuration:<br />
<br />
Section "ServerFlags" <br />
Option "IndirectGLX" "on" <br />
EndSection <br />
<br />
On the modern Linux systems the appropriate place for these lines would be some file in the /etc/X11/xorg.conf.d/ directory, for instance, /etc/X11/xorg.conf.d/glxsettings.conf.<br />
<br />
This recipe sometimes seems to help with proprietary NVidia drivers.<br />
<br />
<br />
Another workaround is to use x2go. Run a terminal at the remote server using x2goclient. Then set a variable:<br />
<br />
export LIBGL_ALWAYS_INDIRECT=1<br />
<br />
Run your GL application from the terminal.<br />
<br />
[[Category:Known issues]]<br />
[[Category:Sacred knowledge]]<br />
[[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/X_server_settingsX server settings2018-03-30T11:09:18Z<p>A.M.Suharev: Created page with "To allow GL applications run from a remote server via X protocol, one might need to add the following to the local X server configuration: Section "ServerFlags" Optio..."</p>
<hr />
<div>To allow GL applications to run from a remote server via the X protocol, one might need to add the following to the local X server configuration:<br />
<br />
Section "ServerFlags" <br />
Option "IndirectGLX" "on" <br />
EndSection <br />
<br />
On the modern Linux systems the appropriate place for these lines would be some file in the /etc/X11/xorg.conf.d/ directory, for instance, /etc/X11/xorg.conf.d/glxsettings.conf.<br />
<br />
This recipe seems to help with proprietary NVidia drivers.<br />
<br />
<br />
[[Category:Known issues]]<br />
[[Category:Sacred knowledge]]<br />
[[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/Category:SoftwareCategory:Software2017-11-27T10:45:13Z<p>A.M.Suharev: Created page with "Software-related topics"</p>
<hr />
<div>Software-related topics</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/%D0%92%D0%BD%D0%B5%D1%88%D0%BD%D0%B5%D0%B5_%D0%9F%D0%9EВнешнее ПО2017-11-27T10:44:41Z<p>A.M.Suharev: </p>
<hr />
<div>The directory /ceph/groups/sctau/software/external has been created for installing external software.<br />
<br />
The compilers gcc 6.4.0 and gcc 7.2.0 are installed.<br />
<br />
To activate a specific version, use the script<br />
/ceph/groups/sctau/software/scripts/setup.sh with the parameter gcc6 or gcc7, for example:<br />
<br />
source /ceph/groups/sctau/software/scripts/setup.sh gcc7<br />
<br />
[[Category:Software]]</div>A.M.Suharevhttps://ctd.inp.nsk.su/wiki/index.php/%D0%92%D0%BD%D0%B5%D1%88%D0%BD%D0%B5%D0%B5_%D0%9F%D0%9EВнешнее ПО2017-11-27T10:42:56Z<p>A.M.Suharev: Created page with "Для установки внешнего ПО создан каталог /ceph/groups/sctau/software/external Установлены компиляторы gcc 6.4.0 и gc..."</p>
<hr />
<div>The directory /ceph/groups/sctau/software/external has been created for installing external software.<br />
<br />
The compilers gcc 6.4.0 and gcc 7.2.0 are installed.<br />
<br />
To activate a specific version, use the script<br />
/ceph/groups/sctau/software/scripts/setup.sh with the parameter gcc6 or gcc7, for example:<br />
<br />
source /ceph/groups/sctau/software/scripts/setup.sh gcc7<br />
<br />
[[Категория:software]]</div>A.M.Suharev