Training – Placement #skillexam #(skill #assessment #tool), #offshore #development #services, #recruitment #process #outsourcing #service, #traning



Dear Prospective Training Candidates,

CBS Information Systems, Inc. is a fast-growing software development and training company that has been offering mission-critical solutions to businesses through cutting-edge technologies since 2000.

We are currently accepting candidates for various training programs. Training can be taken classroom style or remotely (online) with a live instructor.

If you are qualified, available, interested, or planning to make a change, please respond immediately. In considering candidates, time is of the essence, so please respond as soon as possible.

Here are our offerings:

  • Certified trainers with real-time experience.
  • Unlimited lab access during non-training hours.
  • Aggressive placement assistance; CBS will assist in job placement.
  • Fee reimbursement upon successful placement by CBS.
  • No contract.
  • Real-time project exercises and step-by-step procedures with handouts.
  • Assistance with resume and interview preparation.
  • Training drawn from hands-on consultancy experience.
  • Experienced real-time consultants available for comments and guidance.
  • Open to candidates with any visa status.
  • We provide the best possible trainer for each course.
  • We offer the most competitive pricing for our training programs.
  • We offer 100% online training with live instructors.
  • You get trained from the comfort of your own place.
  • We use a state-of-the-art learning management system.
  • You are also connected by phone with the live instructor during class, in a conference fashion.

Sql server for loop #sql #server #for #loop, #pl/sql #for #loop



PL/SQL FOR Loop tips

Oracle Tips by Burleson

The PL/SQL FOR Loop

The FOR loop executes for a specified number of times, defined in the loop definition. Because the number of loops is specified, the overhead of checking a condition to exit is eliminated. The number of executions is defined in the loop definition as a range from a start value to an end value (inclusive). The integer index in the FOR loop starts at the start value and increments by one (1) for each loop until it reaches the end value.

SQL> begin
  2    for idx in 2..5 loop
  3      dbms_output.put_line (idx);
  4    end loop;
  5  end;
  6  /
2
3
4
5

PL/SQL procedure successfully completed.

In the example below a variable idx is defined, assigning it the value 100. When the FOR loop executes, the variable idx is also defined as the index for the FOR loop. The original variable idx goes out of scope when the FOR loop defines its index variable. Inside the FOR loop, the idx variable is the loop index. Once the FOR loop terminates, the loop index goes out of scope and the original idx variable is again in scope.

SQL> declare
  2    idx number := 100;
  3  begin
  4    dbms_output.put_line (idx);
  5    for idx in 2..5 loop
  6      dbms_output.put_line (idx);
  7    end loop;
  8    dbms_output.put_line (idx);
  9  end;
 10  /
100
2
3
4
5
100

PL/SQL procedure successfully completed.

You can use the loop index inside the loop, but you cannot change it. If you want to loop by an increment other than one, you will have to do so programmatically, because the FOR loop only increments the index by one.

SQL> begin
  2    for i in 4..200 loop
  3      i := i + 4;
  4    end loop;
  5  end;
  6  /
  i := i + 4;
  *
ERROR at line 3:
ORA-06550: line 3, column 5:
PLS-00363: expression 'I' cannot be used as an assignment target
ORA-06550: line 3, column 5:
PL/SQL: Statement ignored
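
The article does not show the workaround; one common approach (a sketch, not from the original) is to keep the FOR loop's one-by-one index and derive the value you actually need from it:

begin
  for i in 1..5 loop
    dbms_output.put_line (i * 4);   -- effective step of 4: prints 4, 8, 12, 16, 20
  end loop;
end;
/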

The loop index start and stop values can be expressions or variables. They are evaluated once at the start of the loop to determine the number of loop iterations. If their values change during the loop processing, it does not impact the number of iterations.

SQL> declare
  2    n_start number := 3;
  3    n_stop  number := 6;
  4  begin
  5    for xyz in n_start..n_stop loop
  6      n_stop := 100;
  7      dbms_output.put_line (xyz);
  8    end loop;
  9  end;
 10  /
3
4
5
6

Line 6 changes the stop value, setting it to 100. But the loop still terminates at the value of 6. The loop index start and stop values are always defined from lowest to highest. If you want the index to count down, use the REVERSE keyword.

SQL> begin
  2    for num in 4..7 loop
  3      dbms_output.put_line (num);
  4    end loop;
  5
  6    for num in reverse 4..7 loop
  7      dbms_output.put_line (num);
  8    end loop;
  9
 10    for num in 7..4 loop
 11      dbms_output.put_line (num);
 12    end loop;
 13  end;
 14  /
4
5
6
7
7
6
5
4

PL/SQL procedure successfully completed.

Notice that the third FOR loop COMPILED BUT DID NOT EXECUTE! The FOR loop calculated the number of loop iterations and got a negative number, therefore the loop count was zero.

In the next example a FOR loop is used to calculate the factorial of a number. The factorial of x counts the number of ways to arrange x items and is defined as x! = x*(x-1)*(x-2)*...*1, with 0! defined as 1. For example:

8! = 8*7*6*5*4*3*2*1 = 40320

SQL> declare
  2    v_seed number := &numb;
  3    v_hold number := 1;
  4  begin
  5    for i in reverse 1..v_seed loop
  6      v_hold := v_hold * i;
  7    end loop;
  8    dbms_output.put_line ('!' || v_seed || ' = ' || v_hold);
  9  end;
 10  /

Enter value for numb: 8
!8 = 40320

SQL> /
Enter value for numb: 4
!4 = 24

Related PL/SQL FOR Loop Articles:

The FOR loop runs one or more executable statements placed within its loop structure while the loop index value is between the lower bound and the upper bound.

The prototype below defines the basic structure of the FOR loop.

FOR loop_index IN [REVERSE] lower_bound .. upper_bound LOOP
   -- executable statements
END LOOP [label_name];

Note: The REVERSE keyword is optional in the FOR loop's structure. The label name after END LOOP is also optional; it must match a label placed before the loop (<<label_name>>) and is meant for identifying a particular loop's end with ease.

The lower_bound value cannot be greater than the upper_bound value; otherwise the loop will not run even once.

Note: If the lower_bound and the upper_bound values are equal, the FOR loop executes exactly once.

When the REVERSE keyword is not specified, the loop_index value starts at the lower_bound and increments by 1 for each iteration of the loop until it reaches the upper_bound.

When the REVERSE keyword is specified, the loop_index value starts at the upper_bound and decrements by 1 for each iteration of the loop until it reaches the lower_bound.

PL/SQL FOR loop tips

The script below runs the loop five times, starting from the lower bound value 1 and incrementing by 1 until it reaches the upper bound value 5.
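
Only the loop header fragment of that script survives here (FOR loop_index IN 1..5), so the following is a minimal reconstruction matching the description above:

begin
  for loop_index in 1..5 loop
    dbms_output.put_line (loop_index);
  end loop;
end;
/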



ColdFusion Shopping Cart #coldfusion #shopping #cart, #coldfusion #programming, #cold #fusion #shopping #cart, #cold #fusion #programming,



MERCHANT ACCOUNTS
CREDIT CARD PAYMENT GATEWAYS

cf_ezcart is 100% compatible with and approved by Authorize.net, the Internet’s leading real-time credit card processor. To learn more about Authorize.net or to apply for a merchant account, CLICK HERE.

cf_ezcart also supports several other ecommerce payment gateway providers, and PayPal. To see if your real time processor is supported, click here. We highly recommend Authorize.net for proven secure, unlimited online payment transactions available in multiple currencies. Authorize.net’s real-time credit card processing works on any server running ColdFusion, without the need for additional software installation.

cf_ezcart ColdFusion Shopping Cart Application
Version 10.1

Secure eCommerce Solution, Priced For The Small Business Owner.

NEW MERCHANTS. Are you trying to start up your first online store but just feel overwhelmed? Feel free to call or email us with any questions you may have. We’ll install and set up cf_ezcart, and integrate the basic installation into your web site for free. If you will be using your current host, please be sure you meet the system requirements at the bottom of this page.

WEBSITE DESIGNERS AND COLDFUSION DEVELOPERS. Do you need a quick and easy ecommerce solution for your customers? You will find cf_ezcart to be a simple and affordable answer. We’ll work with you and give you as much assistance as you need until you are comfortable setting up cf_ezcart on your own. Developers receive discounts after the first purchase. Visit our Pricing page for more information.

Search Engine Safe – Search Engine Friendly
Read More. View Examples.

Our secure, scalable, ecommerce shopping cart program was designed with the small business owner in mind. However, don’t let the price fool you. In development since 1999, and built on ColdFusion (ColdFusion is an application server designed for delivering powerful B2C or B2B web based solutions), cf_ezcart is a robust, scalable enterprise level application, capable of handling tens of thousands of products. cf_ezcart may also be deployed in a clustered server environment to meet the needs of even the busiest ecommerce site.

Fully Compatible With ColdFusion MX

Automated US State, County and City tax calculation based on the customer’s shipping (or billing) zip code at checkout. cf_ezcart is compatible with the file format provided by Tax Data Systems, Inc.. Enter and update State, County and City taxes for each zip code in an entire state in seconds. Should you decide to purchase cf_ezcart, you will be provided with a promotional code good for 10% off at Tax Data Systems. After initial entry, tax data may be updated manually by State, City or County, or automated using an updated tax data file if you subscribe to their monthly updates plan.

Packed With Powerful Features.

  • QuickBooks Compatible
  • Google Checkout Compatible
  • PayPal Website Payments Pro Compatible
  • PayPal’s Instant Payment Notification System
  • Search Engine Safe and Search Engine Friendly
  • Verified By Visa Supported
  • MasterCard SecureCode Supported
  • Web Based Administration
  • Multi-Level Affiliate Program
  • Unparalleled Shipping Features
  • Advanced Search Capabilities
  • Unlimited Product Styles Options
  • Works On Browsers With Cookies Disabled
  • Encourage Return Customers By Allowing bankable “Web Bucks”
  • Live Inventory Control
  • Comprehensive Order Reporting
  • Extensive Discount Options
  • Electronic Gift Certificates
  • Supports Numerous eCommerce Payment Gateways
  • Integrate Easily Into Existing Website (we’re happy to help)
  • And Much More!

Visit our ColdFusion Shopping Cart Features page to view ALL of the features that come with cf_ezcart Shopping Cart Application.

ColdFusion developers and/or website designers that need an ecommerce solution for their customers will find our cart to be a quick, simple and affordable answer. cf_ezcart will run on any ColdFusion server running Version 5.0 thru CFMX on Windows NT, Windows 2000, Windows 2003, Linux or Sun. You will receive a developer discount on all purchases after the first purchase. Supported databases include Access, MySQL, SQL Server 7 and SQL Server 2000. Visit our Pricing page for more information.

System Requirements for Shopping Cart

  • ColdFusion 5 supported, but may not work with some newer XML features, such as QuickBooks.
  • ColdFusion MX 6.1 (7 or higher recommended)
  • ColdFusion MX 7 or ColdFusion MX 8
  • MySQL, SQL Server 7, SQL Server 2000, or SQL Server 2005.
  • Access is supported but is not recommended in a production environment.
  • CFObject tag must be enabled to use some features and gateways. Note that GoDaddy’s ultra-cheap hosting does not meet this requirement.

System Requirements for QuickBooks Module

  • ColdFusion MX 7 or ColdFusion MX 8
  • SQL Server 7, SQL Server 2000, or SQL Server 2005, or
  • MySQL version 4 or higher.

System Requirements for Credit Card Payment Gateways

  • Please see Supported Payment Gateways for individual requirements.
  • CFObject tag must be enabled to use some gateways. Note that GoDaddy’s ultra-cheap hosting does not meet this requirement.

UPS is a registered trademark of United Parcel Service of America, Inc.

QuickBooks is a registered trademark and service mark of Intuit Inc. in the United States.

Tropical Web Creations, Inc. is a member of the Intuit Developer’s Network.

We have had numerous unpleasant experiences with “black hats” in the complex world of the web and e-commerce for our business and another company we own. Bud and company, doubtless, wear the “white hats.” They are among the most cooperative, customer-focused, e-commerce folks with whom we have dealt.
Read More!
Jeff Capshaw
Card Quest, Inc.


How to check for last SQL Server backup #sql #server #database #backup #script



How to check for last SQL Server backup

As a database professional, I get asked to review the health of database environments very often. When I perform these reviews, one of the many checks I perform is reviewing backup history and making sure that the backup plans in place meet the requirements and service level agreements for the business. I have found a number of backup strategies implemented using full, differential and transaction log backups in some fashion.

In more cases than I would like to share, I have found business-critical databases that are not being backed up properly. In the worst case this means having no backups at all, or a backup strategy that does not meet the recoverability requirements of the business.

When doing an initial check I gather many details about the environment. Regarding backups, I capture things such as the recovery model, last full backup, last differential, and the last two transaction log backups. Having this information allows me to determine what the backup strategy is and point out any recoverability gaps.

Some examples I have found are: 1) no backups, period; 2) a full backup from months ago plus daily differentials, where the full had been purged from the system; 3) a full backup of a user database in FULL recovery mode with no transaction log backups; and 4) proper use of weekly full, daily differential, and scheduled transaction log backups, except that the schedule was set to hourly while the customer expected no more than 15 minutes of data loss. I am happy to report that I do find proper backup routines that meet the customers’ service level agreements too.

The code I like to use for this check is below.
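
The script itself does not appear in this copy of the post; a sketch that gathers similar information from msdb backup history (recovery model plus the most recent full, differential, and log backup for each database) might look like this:

SELECT d.name AS database_name,
       d.recovery_model_desc,
       MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS last_full_backup,
       MAX(CASE WHEN b.type = 'I' THEN b.backup_finish_date END) AS last_differential_backup,
       MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS last_log_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
WHERE d.name <> 'tempdb'
GROUP BY d.name, d.recovery_model_desc
ORDER BY d.name;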

Ensuring that you have backups is crucial to any check of a SQL Server instance. In addition to ensuring that backups are being created, validation of those backups is just as important. Backups are only valid if you can restore them.

When I have the opportunity to share my experiences of backup and recovery with people, I always like to share how to back up the tail end of a transaction log and how to attach a transaction log from one database to another in order to back up the tail end of the log. I have created a couple of videos on how to accomplish this that you can view using this link: http://www.timradney.com/taillogrestore

20 Responses to How to check for last SQL Server backup

Tim, I always like to add a disclaimer that just because the history is there doesn't mean the file is; I've seen times when they got cleaned up too soon! Another edge case is having deadlocks that prevent the history record being added, making it look like the backup didn't happen. I like the premise of checking/confirming backups match what they expect.

Very good points.

Andrew Alger says:

Tim,
Great script, thanks for sharing!
To take this one step further, I created a scheduled script that checks and alerts me if my backups have not been run within a set time. Server updates and restarts always seem to take place during my backup windows.

Also, enough cannot be said for validating backups. My sys admin was running nightly backups that messed up my backup chain and I had no idea until I began validating these. No need to say what would have happened had I needed to perform an actual restore

Kevin M Parks says:

Care to share your scheduled script?

Hi Andrew,
Can you please share the scheduled script that checks and alerts with me?

Thanks for sharing your thoughts and your script.

I am curious as to what questions you would ask to determine what the SLA should be (or, more specifically, what point people want to be able to recover to).

In my experience I quite often find full recovery models with full, differential and transaction log backups in place for systems that I feel simply do not need that level of backup.

For instance, I found a system backup scheduled to be restored daily onto a separate database on a reporting server. This then had full backups with log file backups running on the reporting server.

In other instances, quite often systems are able to dynamically recreate the data in an instant, but simply use the database as a convenience. In such cases, I would set the recovery model to simple and have no backups run. I then use a PowerShell script to shut down the (vSphere) server at night and simply back up the entire server in a shutdown state. Since it is in a shutdown state, the data file is backed up without a problem, and since the SLA is content with falling back one day (which many production systems I have are), this seems to be the quickest model of recovery without any fuss or hunting for a script and backup files.

Great question. For me, I typically ask two very simple questions: 1) How much data can you afford to lose? 2) How long can your system be down? The typical response is none and none, and then the explanations and negotiations start. I have systems like you mentioned where a full from the previous night is sufficient. Your scenario of using a file backup, such as shutting down the service and backing up all the files, meets your SLA. For a reporting server like you mentioned that is just a restored copy from production on a daily basis, why back it up at all? For organizations I support, we document the SLA (RPO and RTO) of each database and work to meet that.

Many times, working with different lines of business requires explaining to the business how backups work, what the industry standards are, and what is realistic. When they don't want to hear that a 15-minute RPO is best, then present them with the price tag to lower the RPO. It really boils down to numbers and dollars in some cases.

Awesome query! I've been looking for something like this for a while. If you are interested, I modified it a bit for my use and turned it into a sproc that uses dynamic SQL to check all my servers and instances. I'm making myself a dashboard with this. I'd like to share it with you.

I still have one more piece I'd like to add involving xp_fileexist to complete my dashboard project. I hope to have that solved soon.

Thanks again for taking the time to share this with all of us! Outstanding job!


Deprecated Database Engine Features in SQL Server 2012 #sql #server #database #monitoring



Deprecated Database Engine Features in SQL Server 2012

This topic describes the deprecated SQL Server Database Engine features that are still available in SQL Server 2012. These features are scheduled to be removed in a future release of SQL Server. Deprecated features should not be used in new applications.

You can monitor the use of deprecated features by using the SQL Server Deprecated Features Object performance counter and trace events. For more information, see Use SQL Server Objects.
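
For example, the counter values can be read straight from the DMV mentioned in the next paragraph; this quick check is a sketch and not part of the original topic:

SELECT instance_name AS deprecated_feature,
       cntr_value    AS times_used_since_startup
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Deprecated Features%'
  AND cntr_value > 0
ORDER BY cntr_value DESC;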

The following SQL Server Database Engine features will not be supported in the next version of SQL Server. Do not use these features in new development work, and modify applications that currently use these features as soon as possible. The Feature name value appears in trace events as the ObjectName and in performance counters and sys.dm_os_performance_counters as the instance name. The Feature ID value appears in trace events as the ObjectId.

Level0type = ‘type’ and Level0type = ‘USER’ to add extended properties to level-1 or level-2 type objects.

Use Level0type = ‘USER’ only to add an extended property directly to a user or role.

Use Level0type = ‘SCHEMA’ to add an extended property to level-1 types such as TABLE or VIEW, or level-2 types such as COLUMN or TRIGGER. For more information, see sp_addextendedproperty (Transact-SQL) .

Extended stored procedure programming

Use CLR Integration instead.

Extended stored procedure programming

Use CLR Integration instead.

Extended stored procedures

Use CREATE LOGIN

Use DROP LOGIN IsIntegratedSecurityOnly argument of SERVERPROPERTY

AlwaysOn Availability Groups

If your edition of SQL Server does not support AlwaysOn Availability Groups, use log shipping.

CREATE TABLE, ALTER TABLE, or CREATE INDEX syntax without parentheses around the options.

Rewrite the statement to use the current syntax.

sp_configure option ‘allow updates’

System tables are no longer updatable. Setting has no effect.

sp_configure ‘allow updates’

‘set working set size’

Now automatically configured. Setting has no effect.

sp_configure ‘open objects’

sp_configure ‘set working set size’

sp_configure option ‘priority boost’

System tables are no longer updatable. Setting has no effect. Use the Windows start /high … program.exe option instead.

sp_configure ‘priority boost’

sp_configure option ‘remote proc trans’

System tables are no longer updatable. Setting has no effect.

sp_configure ‘remote proc trans’

Specifying the SQLOLEDB provider for linked servers.

SQL Server Native Client (SQLNCLI)

SQLOLEDB for linked servers

Native XML Web Services

The CREATE ENDPOINT or ALTER ENDPOINT statement with the FOR SOAP option.

Use Windows Communications Foundation (WCF) or ASP.NET instead.

The ALTER LOGIN WITH SET CREDENTIAL syntax

Replaced by the new ALTER LOGIN ADD and DROP CREDENTIAL syntax

ALTER LOGIN WITH SET CREDENTIAL

CREATE APPLICATION ROLE

DROP APPLICATION ROLE

ALTER APPLICATION ROLE

ALTER SCHEMA or ALTER AUTHORIZATION

ALTER LOGIN DISABLE

These stored procedures return information that was correct in SQL Server 2000. The output does not reflect changes to the permissions hierarchy implemented in SQL Server 2008. For more information, see Permissions of Fixed Server Roles .

GRANT, DENY, and REVOKE specific permissions.

PERMISSIONS intrinsic function

Query sys.fn_my_permissions instead.

RC4 and DESX encryption algorithms

Use another algorithm such as AES.

Server Configuration Options

c2 audit option

default trace enabled option

sp_configure ‘c2 audit mode’

sp_configure ‘default trace enabled’

SQL Server Agent

net send notification

Command or PowerShell scripts

SQL Server Management Studio

Solution Explorer integration in SQL Server Management Studio

Source Control integration in SQL Server Management Studio

System Stored Procedures

None. Support for increased partitions is available by default in SQL Server 2012

The compatibility views do not expose metadata for features that were introduced in SQL Server 2005. We recommend that you upgrade your applications to use catalog views. For more information, see Catalog Views (Transact-SQL) .

The use of the vardecimal storage format.

Vardecimal storage format is deprecated. SQL Server 2012 data compression compresses decimal values as well as other data types. We recommend that you use data compression instead of the vardecimal storage format.

Vardecimal storage format

Use of the sp_db_vardecimal_storage_format procedure.

Vardecimal storage format is deprecated. SQL Server 2012 data compression compresses decimal values as well as other data types. We recommend that you use data compression instead of the vardecimal storage format.

Use of the sp_estimated_rowsize_reduction_for_vardecimal procedure.

Use data compression and the sp_estimate_data_compression_savings procedure instead.

The cookie OUTPUT parameter for sp_setapprole is currently documented as varbinary(8000), which is the correct maximum length. However, the current implementation returns varbinary(50). If developers have allocated varbinary(50), the application might require changes if the cookie return size increases in a future release. Though not a deprecation issue, this is mentioned in this topic because the application adjustments are similar. For more information, see sp_setapprole (Transact-SQL).



Sql primary key and index – Stack Overflow #sql #server # #index #fragmentation



As everyone else has already said, primary keys are automatically indexed.

Creating more indexes on the primary key column only makes sense when you need to optimize a query that uses the primary key and some other specific columns. By creating another index on the primary key column and including some other columns with it, you may reach the desired optimization for a query.

For example, suppose you have a table with many columns but you are only querying the ID, Name and Address columns. Taking ID as the primary key, we can create the following index that is built on ID but includes the Name and Address columns.
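
The index definition is not shown in this copy of the answer; a sketch (the table and index names are assumptions):

CREATE NONCLUSTERED INDEX IX_MyTable_ID_Name_Address
ON dbo.MyTable (ID)
INCLUDE (Name, Address);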

So, when you use this query:
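
The query itself is missing here; again a sketch, against the same assumed table:

SELECT ID, Name, Address
FROM dbo.MyTable
WHERE ID = 42;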

SQL Server will give you the result only using the index you’ve created and it’ll not read anything from the actual table.

NOTE: This answer addresses enterprise-class development in-the-large .

This is an RDBMS issue, not just SQL Server, and the behavior can be very interesting. For one, while it is common for primary keys to be automatically (uniquely) indexed, it is NOT absolute. There are times when it is essential that a primary key NOT be uniquely indexed.

In most RDBMSs, a unique index will automatically be created on a primary key if one does not already exist. Therefore, you can create your own index on the primary key column before declaring it as a primary key, then that index will be used (if acceptable) by the database engine when you apply the primary key declaration. Often, you can create the primary key and allow its default unique index to be created, then create your own alternate index on that column, then drop the default index.

Now for the fun part–when do you NOT want a unique primary key index? You don’t want one, and can’t tolerate one, when your table acquires enough data (rows) to make the maintenance of the index too expensive. This varies based on the hardware, the RDBMS engine, characteristics of the table and the database, and the system load. However, it typically begins to manifest once a table reaches a few million rows.

The essential issue is that each insert of a row or update of the primary key column results in an index scan to ensure uniqueness. That unique index scan (or its equivalent in whichever RDBMS) becomes much more expensive as the table grows, until it dominates the performance of the table.

I have dealt with this issue many times with tables as large as two billion rows, 8 TBs of storage, and forty million row inserts per day. I was tasked to redesign the system involved, which included dropping the unique primary key index practically as step one. Indeed, dropping that index was necessary in production simply to recover from an outage, before we even got close to a redesign. That redesign included finding other ways to ensure the uniqueness of the primary key and to provide quick access to the data.

answered Jan 20 ’09 at 20:07

I know this is an ancient topic, but I don't understand how a uniqueness scan of one index would be such a load on the system. A B+tree scan should be O(log n) * v, where v is constrained overhead for index fragmentation, imperfect tree balance, etc. Thus 2 billion rows would be log base 2 of 2,000,000,000 (about 31 seeks) times, say, 2 or 3 or even 10. 40M inserts per day is about 462/sec, at ~100 IOs per insert. Ahh. Oh. I see. And this was before widespread SSDs. Charles Burns Apr 20 at 22:32


Restore your SQL Server database using transaction logs #how #to #restore #a #sql #database



Restore your SQL Server database using transaction logs

Most DBAs dread hearing that they need to restore a database to a point in time, especially if the database is a production database. However, knowing how to do this is of the utmost importance for a DBA’s skill set. I’ll walk you through the steps of how to restore a SQL Server database to a point in time to recover a data table.

The scenario

A coworker calls you in a panic because he accidentally deleted some production data, and he wants you to restore the lost records.

If you are lucky, you have a data auditing system in place, and you can restore these records from an audit table. If you do not have a tool that can read a transaction log so that you can undo transactions, you will likely need to restore the altered database to a certain point in time, either on the same server or on a separate server from the one hosting the current database instance.

The restoration process

Note that, for the purpose of this article, I am assuming that your database recovery mode is set to FULL.

The first step in the process is to perform a tail-log backup. You want to perform this type of backup before a database restore to ensure that any records that have changed since the last backup are available to be included in the restore process.
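
The tail-log backup command itself is not shown in the article; a sketch (the database name and path are placeholders, and WITH NO_TRUNCATE lets the log be backed up even if the database is damaged):

BACKUP LOG YourDatabase
TO DISK = 'D:\BackupFiles\YourDatabase_TailLog.trn'
WITH NO_TRUNCATE;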

Next you should locate where the database backup files are stored on the machine or the network. It may be a good idea to copy these files to your target server if you are going to be restoring the database on a different server. In the backup file location, find the very last full database backup that was completed (these files usually end with the extension .bak); you need to restore this full backup. The script below applies the full backup file to the NewDatabase database:

RESTORE DATABASE NewDatabase
FROM DISK = 'D:\BackupFiles\TestDatabaseFullBackup.bak'
WITH MOVE 'PreviousDatabase' TO 'D:\DataFiles\TestDatabase.mdf',
     MOVE 'PreviousDatabase_log' TO 'D:\DataFiles\TestDatabase_Log.ldf',
     NORECOVERY

The code specifies that the location of the full backup file is on your server’s D drive and that you are restoring the file to the database named NewDatabase. The statement moves the data file and the log file from the full backup to new files for the TestDatabase database. The last option in the script, NORECOVERY, is very crucial. The NORECOVERY mode is one of three available options, which are outlined below.

  • NORECOVERY: Tells SQL Server that you are not finished restoring the database and that subsequent restore files will occur. While the database is in this state, the database is not yet available, so no connections are allowed.
  • RECOVERY: Tells SQL Server that you are finished restoring the database, and it is ready to be used. This is the default option, and it is by far the one that is used most often.
  • STANDBY: Tells SQL Server that the current database is not yet ready to be fully recovered and that subsequent log files can be applied to the restore. You can use this option so that connections are available to the restored database if necessary. However, future transaction logs can only be applied to the database if no current connections exist.

Once you restore the full backup using the NORECOVERY option, you can begin applying the transaction log backups or the differential backup.

A differential backup is a backup of any changes to the database that have occurred since the last full database backup. If you have multiple differential backups, you will only need to restore the very last one taken. In this situation, there are no differential backups, so you can move directly to the transaction log backups.

Transaction log backups

A transaction log backup keeps track of all transactions that have occurred since the last transaction log backup; it also allows you to restore your database to a point in time before a database error occurred. Transaction log backups occur in sequence, creating a chain. When restoring a sequence of transaction log backups to a point in time, it is required that the transaction log files are restored in order.

When you use a database maintenance plan to create the transaction log backups, a time indicator is typically included in the transaction log file name. The script below applies three transaction log backups using the NORECOVERY option, and the last statement recovers the database, making it available as of the very end of the last transaction log file.

RESTORE LOG NewDatabase
FROM DISK = 'D:\BackupFiles\TestDatabase_TransactionLogBackup1.trn'
WITH NORECOVERY

RESTORE LOG NewDatabase
FROM DISK = 'D:\BackupFiles\TestDatabase_TransactionLogBackup2.trn'
WITH NORECOVERY

RESTORE LOG NewDatabase
FROM DISK = 'D:\BackupFiles\TestDatabase_TransactionLogBackup3.trn'
WITH NORECOVERY

RESTORE LOG NewDatabase
FROM DISK = 'D:\BackupFiles\TestDatabase_TransactionLogBackup4.trn'
WITH RECOVERY

Restoring to a point in time

In the example above, you restore the database to the end of the last transaction log. If you want to recover your database to a specific point in time before the end of the transaction log, you must use the STOPAT option. The script below restores the fourth transaction log in the log sequence to 4:01 PM — just before the database mishap occurred.

RESTORE LOG NewDatabase
FROM DISK = 'D:\BackupFiles\TestDatabase_TransactionLogBackup4.trn'
WITH STOPAT = N'6/28/2007 4:01:45 PM', RECOVERY

Now that you have the database restored to the point where you need it to be, it is time to decide how to help the developers make their situation a little easier. My suggestion is to copy the table the developers need to a separate table on the server so that you or they can correct the data problem.

Be prepared

Restoring your database to a point in time is one of those things that you never want to have to use, but you need to be able to complete it if necessary. I took an overview approach as to how to restore your SQL Server database using transaction logs for a certain circumstance. It’s important to note that companies use different approaches for backing up data, so it is very important to be thoroughly involved in your company’s database backup process. Be sure to test restores and restore situations often so that you are ready when a disaster does occur.


About Tim Chapman

Tim Chapman is a SQL Server MVP, a database architect, and an administrator who works as an independent consultant in Raleigh, NC, and has more than nine years of IT experience.



Remote sql server dba #remote #sql #server #dba



Database Services Solutions!

Get Around the Clock Database Support at an Affordable Price.

Dobler Consulting Remote Database Administration and Management services are branded as SpectrumDB. With several flexible options to choose from, SpectrumDB allows organizations of all sizes to supplement their database administrative needs with remote, highly qualified, on-shore database experts 24/7. Leverage our experts to increase your organization’s knowledge, mobility, flexibility, business continuity and data security. The program is optimized for Sybase, SQL Server, Oracle, MySQL and MongoDB database support.

A Better Approach To Building a Data Warehouse.

XpressInsight is a Framework as a Service (FaaS) Data Warehousing and Business Intelligence solution. Custom data warehouses can be built in a fraction of the time required by more traditional approaches, at the low cost of cloud computing. Our set of software tools and design patterns can be leveraged to solve your reporting problems using data from any source in your organization. It is hosted in the secure cloud and is certified HIPAA, SSAE16, and PCI compliant.

Transform Raw Data into Informed Business Decisions.

Whether you want to implement XpressInsight or need support with another data warehouse solution, Dobler Consulting will provide the experts you need to meet your timeline and budget. We can plan, support and execute your Data Mining, ETL End-to-End Development, BI Reporting and Data Masking projects.

Database Expertise and Strategic Guidance to Take You to the Next Level.

Your database is a virtual extension of your business. You must be diligent about keeping it secure, properly maintained and running at optimum performance to stay competitive. But with increasing data demands, evolving cyber security threats and emerging technological innovations, you need a partner that can help plan and implement database strategies that keep you moving forward with confidence. Our FREE Database Health Check will provide peace of mind.

Simplify Database License Procurement and Compliance.

Dobler Consulting provides expert assistance with the acquisition, management and proper implementation of SAP, Microsoft, and Oracle products. Our goal is to connect you with software that best fits your current and future needs, then provide you with the most up-to-date knowledge, best practices and tools to make the most of your investment.

Cut Expenses, Improve Performance and Enjoy Greater Peace of Mind.

Dobler Consulting virtual hosting empowers you to deploy a range of computing solutions—desktops and applications, development and testing, backup and recovery, data storage and more—while maximizing resiliency, redundancy and security.

Dobler Consulting is a leading provider of database services and information technology support serving clients ranging from small business to FORTUNE companies across multiple industry verticals.

Financial Services

Healthcare

Manufacturing

Retail

Education

Transportation

We’re the Database Experts:

As industry-recognized experts in SAP, Sybase, SQL Server, Oracle and other database solutions, Dobler Consulting empowers businesses like yours to get the most from their database and associated systems. Our team of highly trained and certified experts assess your unique needs and challenges, then develop systematic strategies for optimizing security, performance, reliability and accessibility. Our services include:

  • Database managed services and remote administration
  • Database health checks and performance assessments
  • Business intelligence solution implementation
  • License acquisition and management
  • Staffing and permanent placement of DBAs and specialized IT experts
  • Server hosting and virtualization
  • Backup, recovery and business continuity planning
  • Security and compliance planning and implementation
  • Database migration and deployment

Peter and the whole Dobler Consulting organization is truly a pleasure to work with and have the expertise to implement Sybase Systems. They listen carefully to customers’ requirements and offer suggestions to customers to provide the best overall solution. I would not hesitate to recommend Dobler Consulting.

Bill Dodd Sr. Product Sales Executive, SAP Inc.

Dobler Consulting has been an invaluable resource for maintaining our Sybase databases. Our rep, Bob Barker, always responds quickly and effectively to issues. He also does periodic checks on our databases to ensure we avoid problems. We have a real comfort level knowing such a knowledgeable resource is available and a part of our team.

LMG Applications Support Team Local Media Group, Inc.

We have so appreciated Peter and his staff at Dobler Consulting and for their support for our company as it has enabled us to manage our SharePoint site effectively. SharePoint is critical to our company, and thanks to Dobler Consulting we have not had any outages and are able to organize and share assets with all staff members. As a small company, Dobler has allowed us utilize the full potential of SharePoint to our advantage. The response time for any issues we might have has been very prompt and the service from Dobler Consulting has been exemplary.

Troy Miller NRB Network, Inc. President/CEO

Dobler Consulting and their team are excellent strategy partners in the world of SAP, Sybase, SQL, and large-scale customer migrations. Peter and his team provided personal support, deep insight and excellent strategic advice for some of our largest customer scenarios, and went far beyond the call of duty in doing so. Hitachi Consulting is very pleased with the partnering expertise that Dobler Consulting provides, as well as the personalized attention and insight they add to the equation.

David Brown Director, Business Development

See More Testimonials


SQL Data Generator – Data Generator For MS SQL Server Databases #sql #server #code



Generate realistic test data fast

An introduction to SQL Data Generator

Features

  • Create large volumes of data within a couple of clicks in SQL Server Management Studio
  • Generate meaningful test data at row level
  • Column-intelligent data generation – generate data in one column based on the data in another
  • Greater flexibility and manual control for creating foreign key data
  • Extremely fast data generation
  • Over 60 built-in generators with sensible configuration options
  • Shareable custom generators – save regexp and SQL statement generators to share with your team
  • Write your own custom generators in Python, so you can create any extra data you need
  • Seeded random data generation allows you to generate the same collection of data every time
  • Foreign key support for generating consistent data across multiple tables
  • Inter-column dependency support
  • Command-line support for automated data generation
  • Import data from existing data sources
  • Automatic data conversion when the source data is a different data type
  • Optionally disable triggers and constraints to avoid interfering with database logic
  • Support for Microsoft SQL Server 2005, 2008, 2008 R2, 2012, 2014, and SQL Server on Amazon RDS

Screenshot tour

Preview and customize the data

Generate your test data

The data has been created

Cross column generators generate data based on other columns

Use Python scripts to generate your own custom data

Generate data from within SQL Server Management Studio

Case Study

“In less than the time it took me to get my coffee, I had a database with 2 million rows of data for each of 10 tables.”
— Stephanie Beach, QA Manager, Certica Solutions

Stephanie Beach explains how the speed, simplicity and intelligence of SQL Data Generator have proved invaluable in setting up QC departments for start-up companies.

Troy Hunt: Test data done right with SQL Data Generator

Software architect and Microsoft MVP Troy Hunt takes a look at creating realistic test data with SQL Data Generator.

What our customers are saying

SQL Data Generator is almost magical when you see it in action over your own data schema

Troy Hunt, Software Architect

Redgate’s SQL Data Generator inserts 500K rows in the same time that Visual Studio Team System does 300 – not 300K, just 300.

Barry Gervin, ObjectSharp Consulting

SQL Data Generator is simple and effective. It used to take me hours to generate useful test data, for example so we could show our web product to clients. With SQL Data Generator I generated better data in only half an hour, and then, after this initial customisation was done, in only seconds, with just one click.

Michael Gaertner, Quintech

In less than the time it took me to get my coffee, I had a database with 2 million rows of data for each of 10 tables. The database was filled with proper names, cities, geographical locations, FK links and I was able to use the Regex Generator to finely tune specific columns data.

Stephanie Beach, QA Manager, Certica Solutions

A fantastic tool that is very much needed by SQL Server DBAs and developers.

Part of our Database DevOps solution

Redgate’s Database DevOps solution lets you extend your DevOps practices to SQL Server databases so that you can optimize productivity, agility and performance across the full database lifecycle and become a truly high performing IT organization.

From safely making a change in development through to monitoring its impact in production, Redgate is with you every step of the way. We give you the tools and insight you need to optimize your development processes, so you can keep your team moving, keep adding value and keep your data safe.


Introduction to advanced queuing #adrian’s #oracle #pages,oracle-developer.net,oracle #developer,oracle,sql,pl/sql



oracle-developer.net

introduction to advanced queuing

Advanced Queuing (AQ) has been available for several versions of Oracle. It is Oracle’s native messaging software and is being extended and enhanced with every release. This article provides a high-level overview of Advanced Queuing (known as Streams AQ in 10g). In particular, we will see how to set up a queue for simple enqueue-dequeue operations and also create automatic (asynchronous) dequeuing via notification.

Note that AQ supports listening for messages from outside the database (such as JMS queues). As this article is introductory in nature, we will not cover this functionality. Instead we will concentrate solely on in-database messaging.

requirements

The examples in this article require the following specific roles and privileges (in addition to the more standard CREATE SESSION/TABLE/PROCEDURE/TYPE and a tablespace quota):

  • AQ_ADMINISTRATOR_ROLE: to create queue tables and queues; and
  • EXECUTE ON DBMS_AQ: to enable compilation of a PL/SQL procedure during the notification example.

In addition, standard application users that need to enqueue/dequeue messages will require AQ privileges provided via the DBMS_AQADM.[GRANT|REVOKE]_QUEUE_PRIVILEGE APIs.

The examples in this article can be run under any user with the above privileges. I have specifically ensured that schema qualifiers are excluded from all DBMS_AQADM procedure calls (many procedures in DBMS_AQADM require us to specify the names of schema objects to be created or dropped. The schema name can optionally be included to create the objects in another schema, but defaults to current schema if excluded).

creating and starting a queue

AQ handles messages known as “payloads”. The format and structure of the messages are designed by us and can be either user-defined objects or instances of XMLType or ANYDATA (as of 9i). When we create a queue, we need to tell Oracle the payload structure, so we’ll begin by creating a very simple object type for our messages.
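
The type definition itself is not reproduced in this copy of the article; a minimal sketch of a single-attribute payload type (the name demo_queue_payload_type is an assumption used throughout the sketches below) might look like this:

CREATE OR REPLACE TYPE demo_queue_payload_type AS OBJECT
( message_text VARCHAR2(4000) );
/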

Our payload type contains just one attribute. In real applications, our payloads are likely to be far more complex in structure. Now we have the payload defined, we can create a queue table. This table will be used by Oracle to store queued messages until such time that they are permanently dequeued. Queue tables are created using the DBMS_AQADM package as follows.
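
Again the original listing is missing; a sketch of the queue table creation, reusing the assumed payload type name (the queue table name is also an assumption):

BEGIN
   DBMS_AQADM.CREATE_QUEUE_TABLE (
      queue_table        => 'demo_queue_table',
      queue_payload_type => 'demo_queue_payload_type' );
END;
/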

We are now ready to create a queue and start it, as follows.
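
A sketch of creating and starting a queue against that queue table (the queue name demo_queue is an assumption):

BEGIN
   DBMS_AQADM.CREATE_QUEUE (
      queue_name  => 'demo_queue',
      queue_table => 'demo_queue_table' );

   DBMS_AQADM.START_QUEUE (
      queue_name => 'demo_queue' );
END;
/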

By now, we have created a queue payload, a queue table and a queue itself. We can see what objects DBMS_AQADM has created in support of our queue. Note that the payload type is excluded as we created it explicitly ourselves.

We can see that a single queue generates a range of system-generated objects, some of which can be of direct use to us, as we will see later. Interestingly, a second queue is created. This is known as an exception queue. If AQ cannot retrieve a message from our user-queue, it will be placed on the exception queue.

enqueuing messages

We are now ready to enqueue a single message using the DBMS_AQ.ENQUEUE API. In the following example, we enqueue a single message using default options for the ENQUEUE procedure. DBMS_AQ has a wide range of record and array types to support its interfaces and to enable us to modify its behaviour (we can see two of these referenced in the example below).
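
The example listing is not reproduced here; a minimal sketch of a default enqueue, using the enqueue options and message properties records mentioned above:

DECLARE
   v_enqueue_options    DBMS_AQ.ENQUEUE_OPTIONS_T;
   v_message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
   v_message_handle     RAW(16);
BEGIN
   DBMS_AQ.ENQUEUE (
      queue_name         => 'demo_queue',
      enqueue_options    => v_enqueue_options,     -- defaults
      message_properties => v_message_properties,  -- defaults
      payload            => demo_queue_payload_type('This is a test message'),
      msgid              => v_message_handle );
   COMMIT;
END;
/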

We can see that enqueuing a message is very simple. The enqueue operation is essentially a transaction (as it writes to the queue table), hence we needed to commit it.

browsing messages

Before we dequeue the message we just placed on the queue, we’ll “browse” the queue contents. First we can query the AQ$DEMO_QUEUE_TABLE view to see how many messages there are to be dequeued. As we saw earlier, this view was created automatically by DBMS_AQADM.CREATE_QUEUE_TABLE when we created our queue.

As expected, we have just one message on our queue. We can browse the contents of the enqueued messages via this view without taking them off the queue. We have two methods for browsing. First, we can query the view directly as follows.
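
The query itself is not shown in this copy; a sketch of a direct query against the generated view (USER_DATA holds the payload, so its attributes can be referenced through a table alias):

SELECT t.msg_id
,      t.msg_state
,      t.enq_time
,      t.user_data.message_text AS message_text
FROM   aq$demo_queue_table t;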

Second, we can use the DBMS_AQ.DEQUEUE API to browse our messages. We haven’t seen the DEQUEUE API up to this point, but as its name suggests, it’s the DBMS_AQ procedure for dequeuing messages. As with the ENQUEUE API, the DEQUEUE procedure accepts a range of options and properties as parameters. To browse messages without removing them from the queue, we can modify the dequeue properties to use the constant DBMS_AQ.BROWSE (default is DBMS_AQ.REMOVE).

Given this, we can now browse our queue contents.
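
A sketch of a browse-mode dequeue; the message stays on the queue because dequeue_mode is set to DBMS_AQ.BROWSE:

DECLARE
   v_dequeue_options    DBMS_AQ.DEQUEUE_OPTIONS_T;
   v_message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
   v_message_handle     RAW(16);
   v_payload            demo_queue_payload_type;
BEGIN
   v_dequeue_options.dequeue_mode := DBMS_AQ.BROWSE;

   DBMS_AQ.DEQUEUE (
      queue_name         => 'demo_queue',
      dequeue_options    => v_dequeue_options,
      message_properties => v_message_properties,
      payload            => v_payload,
      msgid              => v_message_handle );

   DBMS_OUTPUT.PUT_LINE ('Browsed: ' || v_payload.message_text);
END;
/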

We can easily confirm that our data hasn’t been dequeued by browsing as follows.

dequeuing messages

Now we will actually dequeue the message. This doesn’t have to be from the same session (remember that enqueues are committed transactions and AQ is table-based). Like the enqueue, the dequeue is a transaction (removing the message from the queue table). If we are happy with the message, we must commit the dequeue.
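
A sketch of the destructive dequeue; DBMS_AQ.REMOVE is the default dequeue mode, so no option needs to be changed, and the COMMIT makes the removal permanent:

DECLARE
   v_dequeue_options    DBMS_AQ.DEQUEUE_OPTIONS_T;
   v_message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
   v_message_handle     RAW(16);
   v_payload            demo_queue_payload_type;
BEGIN
   DBMS_AQ.DEQUEUE (
      queue_name         => 'demo_queue',
      dequeue_options    => v_dequeue_options,   -- dequeue_mode defaults to DBMS_AQ.REMOVE
      message_properties => v_message_properties,
      payload            => v_payload,
      msgid              => v_message_handle );

   DBMS_OUTPUT.PUT_LINE ('Dequeued: ' || v_payload.message_text);
   COMMIT;
END;
/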

We can confirm that the message is no longer in our queue.

notification

For the remainder of this article, we will look at automatic dequeue via notification. By this we mean that whenever a message is enqueued, Oracle will notify an agent to execute a registered PL/SQL “callback” procedure (alternatively, the agent can notify an email address or http:// address rather than execute a callback procedure).

For our demonstration, we’ll create and register a PL/SQL procedure to manage our dequeue via notification. This callback procedure will dequeue the message and write it to a database table, to simulate the type of standard in-database operation that callback procedures are used for.

To begin, we’ll clear down the objects created for the previous examples. The supported method is via DBMS_AQADM only as follows.
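
A sketch of that teardown, stopping and dropping the queue before dropping its queue table:

BEGIN
   DBMS_AQADM.STOP_QUEUE       ( queue_name  => 'demo_queue' );
   DBMS_AQADM.DROP_QUEUE       ( queue_name  => 'demo_queue' );
   DBMS_AQADM.DROP_QUEUE_TABLE ( queue_table => 'demo_queue_table' );
END;
/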

Now we can re-create the queue table to allow multiple consumers. A consumer is an agent that dequeues messages (i.e. reads them off the queue). Enabling multiple consumers is a pre-requisite for automatic notification.
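
A sketch of the multi-consumer queue table; the only change from the earlier call is the multiple_consumers flag:

BEGIN
   DBMS_AQADM.CREATE_QUEUE_TABLE (
      queue_table        => 'demo_queue_table',
      queue_payload_type => 'demo_queue_payload_type',
      multiple_consumers => TRUE );
END;
/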

Now we can re-create and start our queue.

To demonstrate the asynchronous nature of notification via callback, we are going to store our dequeued messages in an application table.
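
The table definition is not included in this copy; a sketch (the table name matches the DEMO_QUEUE_MESSAGE_TABLE referenced later, but its columns are assumptions):

CREATE TABLE demo_queue_message_table
( message_text VARCHAR2(4000)
, dequeue_time TIMESTAMP
);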

Now we have an application table, we can create our callback PL/SQL procedure. This procedure will dequeue the enqueued message that triggered the notification. The parameters must be named and typed as shown. The enqueued message will include the enqueue timestamp, as will the insert into our application table. This will give us an idea of the asynchronous delay between the message enqueue and the notification for dequeue.
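
The procedure listing is missing here; a sketch using the standard signature for PL/SQL notification callbacks, dequeuing the specific message identified in the descriptor and recording it in the application table:

CREATE OR REPLACE PROCEDURE demo_queue_callback (
   context  IN RAW,
   reginfo  IN SYS.AQ$_REG_INFO,
   descr    IN SYS.AQ$_DESCRIPTOR,
   payload  IN RAW,
   payloadl IN NUMBER
) AS
   v_dequeue_options    DBMS_AQ.DEQUEUE_OPTIONS_T;
   v_message_properties DBMS_AQ.MESSAGE_PROPERTIES_T;
   v_message_handle     RAW(16);
   v_payload            demo_queue_payload_type;
BEGIN
   -- Dequeue exactly the message that triggered this notification...
   v_dequeue_options.msgid         := descr.msg_id;
   v_dequeue_options.consumer_name := descr.consumer_name;

   DBMS_AQ.DEQUEUE (
      queue_name         => descr.queue_name,
      dequeue_options    => v_dequeue_options,
      message_properties => v_message_properties,
      payload            => v_payload,
      msgid              => v_message_handle );

   -- ...and record it, with the dequeue time, in the application table.
   INSERT INTO demo_queue_message_table (message_text, dequeue_time)
   VALUES (v_payload.message_text, SYSTIMESTAMP);

   COMMIT;
END demo_queue_callback;
/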

We are not quite finished with our notification setup yet. We need to add a named subscriber to the queue and register the action that the subscriber will take on notification (i.e. it will execute our callback procedure). We add and register the subscriber as follows.
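
A sketch of the subscriber and its registration; the subscriber and callback names follow the assumed naming above, and USER supplies the owning schema:

BEGIN
   DBMS_AQADM.ADD_SUBSCRIBER (
      queue_name => 'demo_queue',
      subscriber => SYS.AQ$_AGENT('demo_subscriber', NULL, NULL) );

   DBMS_AQ.REGISTER (
      SYS.AQ$_REG_INFO_LIST(
         SYS.AQ$_REG_INFO(
            USER || '.DEMO_QUEUE:DEMO_SUBSCRIBER',         -- queue:subscriber to watch
            DBMS_AQ.NAMESPACE_AQ,
            'plsql://' || USER || '.DEMO_QUEUE_CALLBACK',  -- callback procedure to run
            HEXTORAW('FF') ) ),                            -- optional context data
      1 );
END;
/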

That completes the setup. We can now test it by enqueuing a message. This message will simply comprise a timestamp of the enqueue so we can compare it with the time that the automatic dequeue happens.

To see if our message was automatically dequeued, we’ll check our application table (DEMO_QUEUE_MESSAGE_TABLE). Remember that this is the table that the callback procedure will insert the dequeued message into. In running these examples, it might be necessary to sleep for a short period, because the dequeue is asynchronous and runs as a separate session in the background.

We can see that the asynchronous dequeue via notification occurred approximately 5 seconds after our enqueue operation.

further reading

We’ve only just touched on AQ’s capabilities in this article. AQ is an enormous application that covers a broad range of scenarios and requirements that are far more sophisticated than this simple introduction. For more information on its potential uses, in addition to its syntactic usage, refer to the Advanced Queuing Guide in the online documentation (or equivalent for your database version).

acknowledgements

The notification and callback example is credited to a thread on Ask Tom entitled Advanced Queuing & PL/SQL Notification .

source code

The source code for the examples in this article can be downloaded from here .

Adrian Billington, July 2005
