Performance

From PresenceWiki
Revision as of 09:29, 20 August 2015

This page is part of the Best Use guide.



The timing of tasks is very important. If a queued task has no schedule in it, it will run repeatedly, around once every minute, 24 hours a day.

Ask yourself how often a task really needs to run.

Don't schedule all tasks to run at the same time, as they will collectively hog system resources.

[[File:530task.png]]

[[File:Taskqueue.png]]

For lots of scheduled tasks that do need to run around the same time, combine them into a single task, remembering to use an error handler.

For each task, clear the data tables and variables; this will save system thread and memory resources.

When using Microsoft SQL Server, you will need to create a custom query to facilitate connection pooling.

[[File:Customquery.png]]

This is just a small query on a small table to let Presence know whether the connection for that resource still exists or whether it needs to create a new one.
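A minimal sketch of what such a validation query does, using Python's sqlite3 purely for illustration — Presence's pooling is configured in the resource dialog itself, and `SELECT 1` here stands in for your own small query on a small table:

```python
# Illustration only: a pool validation query is just a tiny query whose
# success or failure tells you whether the connection is still usable.
import sqlite3

def connection_is_alive(conn):
    """Run a tiny query; if it fails, the connection is stale."""
    try:
        conn.execute("SELECT 1").fetchone()
        return True
    except sqlite3.Error:
        return False

conn = sqlite3.connect(":memory:")
print(connection_is_alive(conn))  # True while the connection is open
conn.close()
print(connection_is_alive(conn))  # False once it has been closed
```

The same principle applies to SQL Server: the cheaper the validation query, the less overhead pooling adds to each resource use.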

If there is a certain task that you need to optimise, you can enable Task Statistics in the task properties.

Once the task has run a few times, you can view the statistics for it via Task Statistics in the Task Menu.

This will help you to identify and locate bottlenecks.

[[File:Stats.png]]

If you just want to read a file or the contents of a simple URL, use the Read Text File node [[File:Readfile.png]] instead of the Object Monitor [[File:Objmon.png]].

Use SQL [[File:Sql.png]] as opposed to Append Columns [[File:Appendcol.png]].

So the select would be:

   'Select  FIRSTNAME || SURNAME as FULLNAME from app.tasks range=1-10'

As seen in this statement it is possible to limit the data returned with the range keyword.

What Presence does in this instance is internally produce the full dataset and then chop it down to the range you've specified.
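This fetch-everything-then-chop behaviour can be sketched with sqlite3 (table and data invented for the demo; sqlite's `LIMIT` plays the role of restricting rows in the SQL itself, where Presence uses its range keyword):

```python
# Both approaches return the same ten rows, but the second lets the
# database do the limiting instead of building the full result first.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (firstname TEXT, surname TEXT)")
conn.executemany("INSERT INTO tasks VALUES (?, ?)",
                 [("User", str(i)) for i in range(1000)])

# Equivalent of the range feature: produce the full dataset, then slice it.
all_rows = conn.execute(
    "SELECT firstname || surname AS fullname FROM tasks").fetchall()
first_ten_slow = all_rows[:10]

# Better: limit the data in the first place by modifying the SQL.
first_ten_fast = conn.execute(
    "SELECT firstname || surname AS fullname FROM tasks LIMIT 10").fetchall()

print(first_ten_slow == first_ten_fast)  # same rows, far less work
```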

Because of this, it would actually be quicker to limit the data in the first place by modifying the SQL than to use the range feature. You should also think about how you iterate over the data set.

Let's say you're building up a string based on a dataset and split the data up for each row [[File:Split.png]], setting a variable [[File:Setvariable.png]] for the current row.

It's better to append each row variable to a file [[File:Writefile.png]] in the split, rather than add the row variable to a task variable and then write that variable to a file at the end.

That is to say, the time the following task takes to run will grow dramatically (roughly quadratically) as the data table gets bigger.

This is due to the ${totalreport} variable getting bigger each time.

[[File:Appendtovartask.png]]

It is the ${totalreport} variable that is slowing the task down, more and more so as it gets bigger and bigger.

This is because each time the variable is written to, it is actually creating a new one (of a larger and larger size each time).

[[File:Settotalreport.png]]

The following task, by contrast, does the same thing much more efficiently, as it appends the row variable directly to the file.

[[File:Appendtofiletask.png]]

Because of this, it has no need to append the ${row} variable to the ${totalreport} variable.
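The difference between the two task shapes can be demonstrated in plain Python (Presence tasks are built graphically, so this is only an analogy; `io.StringIO` stands in for the report file):

```python
# Building one ever-growing string copies everything written so far on
# each iteration; appending each row straight to a file does not.
import io

rows = [f"row {i}\n" for i in range(1000)]

# Slow shape: ${totalreport} grows on every pass through the split.
totalreport = ""
for row in rows:
    totalreport = totalreport + row   # copies the whole string so far

# Fast shape: append each ${row} to the file as you go.
report_file = io.StringIO()           # stands in for the real report file
for row in rows:
    report_file.write(row)

print(report_file.getvalue() == totalreport)  # True: same output, less copying
```

Both loops produce an identical report; only the cost of getting there differs, and that cost gap widens as the data table grows.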