This is a major release with many new features and several breaking changes.
- All queries now use a new implementation, using a job and polling for results.
- The copy, load, and extract methods now all have high-level and low-level versions, similar to `query` and `query_job`.
- Added asynchronous row insertion, allowing data to be collected and inserted in batches.
- Support external data sources for both queries and table views.
- Added create-on-insert support for tables.
- Allow for customizing job IDs to aid in organizing jobs.
- Update high-level queries as follows:
  - Update `QueryJob#wait_until_done!` to use `getQueryResults`.
  - Update `Project#query` and `Dataset#query` with breaking changes:
    - Remove `timeout` and `dryrun` parameters.
    - Change return type from `QueryData` to `Data`.
  - Add `QueryJob#data`.
  - Alias `QueryJob#query_results` to `QueryJob#data` with breaking changes:
    - Remove the `timeout` parameter.
    - Change the return type from `QueryData` to `Data`.
  - Update `View#data` with breaking changes:
    - Remove the `timeout` and `dryrun` parameters.
    - Change the return type from `QueryData` to `Data`.
  - Remove `QueryData`.
  - Update `Project#query` and `Dataset#query` with improved errors, replacing the previous simple error with one that contains all available information for why the job failed.
- Rename `Dataset#load` to `Dataset#load_job`; add high-level, synchronous version as `Dataset#load`.
- Rename `Table#copy` to `Table#copy_job`; add high-level, synchronous version as `Table#copy`.
- Rename `Table#extract` to `Table#extract_job`; add high-level, synchronous version as `Table#extract`.
- Rename `Table#load` to `Table#load_job`; add high-level, synchronous version as `Table#load`.
- Add support for querying external data sources with `External`.
- Add `Table::AsyncInserter`, `Dataset#insert_async` and `Table#insert_async` to collect and insert rows in batches.
- Add `Dataset#insert` to support creating a table while inserting rows if the table does not exist.
- Update retry logic to conform to the BigQuery SLA.
  - Use a minimum back-off interval of 1 second; for each consecutive error, increase the back-off interval exponentially, up to 32 seconds.
  - Retry only when all error reasons are retriable, rather than when any single error reason is retriable.
- Add support for labels to `Dataset`, `Table`, `View` and `Job`.
  - Add `filter` option to `Project#datasets` and `Project#jobs`.
- Add support for user-defined functions to `Project#query_job`, `Dataset#query_job`, `QueryJob` and `View`.
- In `Dataset`, `Table`, and `View` updates, add the use of ETags for optimistic concurrency control.
- Update `Dataset#load` and `Table#load`:
  - Add `null_marker` option and `LoadJob#null_marker`.
  - Add `autodetect` option and `LoadJob#autodetect?`.
- Fix the default value for `LoadJob#quoted_newlines?`.
- Add `job_id` and `prefix` options for controlling client-side job ID generation to `Project#query_job`, `Dataset#load`, `Dataset#query_job`, `Table#copy`, `Table#extract`, and `Table#load`.
- Add `Job#user_email`.
- Set the maximum delay of `Job#wait_until_done!` polling to 60 seconds.
- Automatically retry `Job#cancel`.
- Allow users to specify whether a `View` query uses standard or legacy SQL.
- Add `project` option to `Project#query_job`.
- Add `QueryJob#query_plan`, `QueryJob::Stage` and `QueryJob::Step` to expose query plan information.
- Add `Table#buffer_bytes`, `Table#buffer_rows` and `Table#buffer_oldest_at` to expose streaming buffer information.
- Update `Dataset#insert` and `Table#insert` to raise an error if `rows` is empty.
- Update `Error` with a mapping from code 412 to `FailedPreconditionError`.
- Update `Data#schema` to freeze the returned `Schema` object (as in `View` and `LoadJob`).
- Update Google API Client dependency to 0.14.x.
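
The new high-level/low-level split can be illustrated with a short usage sketch (assumptions: configured application default credentials, a default project, and access to a public sample table — the SQL here is illustrative only):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

sql = "SELECT word FROM `bigquery-public-data.samples.shakespeare` LIMIT 10"

# High-level: starts the query job, polls until it completes, returns the rows.
data = bigquery.query sql
data.each { |row| puts row[:word] }

# Low-level: create the job explicitly, poll for completion, then fetch rows.
job = bigquery.query_job sql
job.wait_until_done!
job.data.each { |row| puts row[:word] } unless job.failed?
```

The same pattern applies to the renamed methods: `Table#copy`, `Table#extract`, and the load methods are now the synchronous high-level calls, while the `*_job` variants return a job to poll.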
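
The retry schedule described above can be modeled in plain Ruby. This is an illustrative sketch of the documented behavior only (start at 1 second, double after each consecutive error, cap at 32 seconds), not the gem's actual implementation:

```ruby
# Back-off schedule model: interval for attempt n is min(2**n, 32) seconds.
def backoff_intervals(error_count)
  (0...error_count).map { |attempt| [2**attempt, 32].min }
end

backoff_intervals(7) # => [1, 2, 4, 8, 16, 32, 32]
```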
- Add `InsertResponse::InsertError#index` (zedalaye)
- Add `maximum_billing_tier` and `maximum_bytes_billed` to `QueryJob`, `Project#query_job` and `Dataset#query_job`.
- Add `Dataset#load` to support creating, configuring and loading a table in one API call.
- Add `Project#schema`.
- Upgrade dependency on Google API Client.
- Update gem spec homepage links.
- Update examples of field access to use symbols instead of strings in the documentation.
- Upgrade dependency on Google API Client.
- Add `#cancel` to `Job`.
- Updated documentation.
Major release, several new features, some breaking changes.
- Standard SQL is now the default syntax.
- Legacy SQL syntax can be enabled by providing `legacy_sql: true`.
- Several fixes to how data values are formatted when returned from BigQuery.
- Returned data rows are now hashes with Symbol keys instead of String keys.
- Several fixes to how data values are formatted when importing to BigQuery.
- Several improvements to manipulating table schema fields.
- Removal of `Schema#fields=` and `Data#raw` methods.
- Removal of `fields` argument from `Dataset#create_table` method.
- Dependency on Google API Client has been updated to 0.10.x.
- Support query parameters using `params` method arguments to `query` and `query_job`.
- Add `standard_sql`/`legacy_sql` method arguments to `query` and `query_job`.
- Add `standard_sql?`/`legacy_sql?` attributes to `QueryJob`.
- Many documentation improvements.
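
These query options combine as in the following sketch (assumptions: configured credentials and a default project; the dataset, table, and SQL are illustrative):

```ruby
require "google/cloud/bigquery"

bigquery = Google::Cloud::Bigquery.new

# Standard SQL is now the default, and query parameters are bound via params:.
# Returned rows are hashes with Symbol keys.
data = bigquery.query "SELECT name FROM `my_dataset.my_table` WHERE id = @id",
                      params: { id: 1 }
data.each { |row| puts row[:name] }

# Legacy SQL can still be opted into per query.
bigquery.query "SELECT name FROM [my_dataset.my_table]", legacy_sql: true
```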
- New service constructor `Google::Cloud::Bigquery.new`
- Add list of projects that the current credentials can access. (remi)
- Fix for timeout on uploads.
This gem contains the Google BigQuery service implementation for the google-cloud gem. The google-cloud gem replaces the old gcloud gem. Legacy code can continue to use the gcloud gem.
- Namespace is now `Google::Cloud`
- The `google-cloud` gem is now an umbrella package for individual gems
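
The namespace change can be sketched as follows (the old gcloud-style call is shown for contrast; project and credential configuration are assumed):

```ruby
# New: the per-service gem under the Google::Cloud namespace.
require "google/cloud/bigquery"
bigquery = Google::Cloud::Bigquery.new

# Old (gcloud gem), which legacy code can continue to use:
# require "gcloud"
# bigquery = Gcloud.new.bigquery
```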