
append, append_row, append_column: methods for appending an array or single rows and columns #932
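In short, this PR adds methods for growing an owned array in place. A minimal usage sketch of the intent, with method names taken from the commit messages below; the exact Result-returning signatures here are an assumption for illustration, not code copied from the PR:

use ndarray::{array, Array2, Axis};

fn main() -> Result<(), ndarray::ShapeError> {
    // Start from an owned 2-D array and grow it in place.
    let mut a: Array2<f64> = array![[1., 2., 3.]];

    // Append a single row (a 1-D view whose length matches the column count).
    a.append_row(array![4., 5., 6.].view())?;

    // Append a whole 2-D array along axis 0.
    a.append(Axis(0), array![[7., 8., 9.], [10., 11., 12.]].view())?;

    assert_eq!(a.shape(), &[4, 3]);
    Ok(())
}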


Merged: 30 commits, Apr 13, 2021

Changes from all commits (30 commits):
- 115b2af zip: In Zip, move dimension check to free function (bluss, Apr 3, 2021)
- 13dab58 append: Add try_append_row and try_append_column (bluss, Mar 5, 2021)
- af47658 append: .try_append_array() to append array to array (bluss, Mar 27, 2021)
- a7e3aab append: Update doc comment for try_append_array (bluss, Apr 3, 2021)
- f003aaa append: Solve axis iteration order problem by sorting axes (bluss, Apr 2, 2021)
- 29896d8 append: Fix situations where we need to recompute stride (bluss, Apr 3, 2021)
- 54359a9 append: Use try_append_array for append row/column (bluss, Apr 3, 2021)
- 6ca4275 stacking: Add a few more test cases for concatenate (bluss, Apr 3, 2021)
- e862946 stacking: Port concatenate to append, supporting Clone (bluss, Apr 3, 2021)
- 0159715 stacking: Port stack to append, supporting Clone (bluss, Apr 3, 2021)
- 9ad7a0f stacking: .select() now supports A: Clone (bluss, Apr 3, 2021)
- 6477157 zip: Fix iteration order debug assertion in Zip (bluss, Apr 3, 2021)
- 0599fd2 append: Handle empty arrays in append (bluss, Apr 3, 2021)
- 0c578aa append: Add append test that deals with incorrect layout situations (bluss, Apr 3, 2021)
- aad9c74 stacking: Add benchmarks for .select() (based on append) (bluss, Apr 4, 2021)
- 3d9a9ba move_into: New method .move_into() for moving all array elements (bluss, Apr 6, 2021)
- f153dc4 append: Adapt memory layout automatically in append if needed (bluss, Apr 6, 2021)
- e0d07d3 append: Handle negative stride arrays (bluss, Apr 7, 2021)
- b09c770 append: Name methods .append(axis, array), .append_row/column() (bluss, Apr 7, 2021)
- 7d522b9 append: Add more append tests for negative stride arrays (bluss, Apr 7, 2021)
- d4819cc zip: Make sure Layout::tendency is inlinable (bluss, Apr 7, 2021)
- f2fd6c2 zip: Add Zip::and_unchecked and use to skip redundant dimension check (bluss, Apr 7, 2021)
- c1a5ceb stacking: Improve .select() with special case for 1D arrays (bluss, Apr 8, 2021)
- 8d44bbc append: Edits and clarifications in append comments (bluss, Apr 11, 2021)
- b103515 stacking: Update concatenate test (bluss, Apr 11, 2021)
- 59013cd move_into: Implement inner-dimension optimization in move-into (bluss, Apr 11, 2021)
- c9dd195 append: Add comment explaining the SetLenOnDrop optimization (bluss, Apr 11, 2021)
- 3f59442 append: Use standard order for non-growing axes in append (bluss, Apr 11, 2021)
- 1206bd6 append: Use positive stride and ignore stride of len 1 axes (bluss, Apr 11, 2021)
- d946360 append: performance optimize stride check (bluss, Apr 13, 2021)
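Several commits above (f153dc4, 3f59442, 1206bd6) deal with adapting the memory layout during append. A small hedged sketch of the behavior those messages describe, assuming the final append_column name and a Result-returning signature; this is illustrative and not taken from the PR's tests:

use ndarray::Array;

fn main() -> Result<(), ndarray::ShapeError> {
    // A row-major (C order) array. Growing along the column axis would
    // normally want column-major storage; per the commit messages, append
    // adapts the memory layout automatically when needed.
    let mut a = Array::from_shape_fn((3, 2), |(i, j)| (10 * i + j) as i32);
    for k in 0..3 {
        a.append_column(Array::from_elem(3, k).view())?;
    }
    assert_eq!(a.shape(), &[3, 5]);
    Ok(())
}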
35 changes: 35 additions & 0 deletions benches/append.rs
@@ -0,0 +1,35 @@
#![feature(test)]

extern crate test;
use test::Bencher;

use ndarray::prelude::*;

#[bench]
fn select_axis0(bench: &mut Bencher) {
    let a = Array::<f32, _>::zeros((256, 256));
    let selectable = vec![0, 1, 2, 0, 1, 3, 0, 4, 16, 32, 128, 147, 149, 220, 221, 255, 221, 0, 1];
    bench.iter(|| {
        a.select(Axis(0), &selectable)
    });
}

#[bench]
fn select_axis1(bench: &mut Bencher) {
    let a = Array::<f32, _>::zeros((256, 256));
    let selectable = vec![0, 1, 2, 0, 1, 3, 0, 4, 16, 32, 128, 147, 149, 220, 221, 255, 221, 0, 1];
    bench.iter(|| {
        a.select(Axis(1), &selectable)
    });
}

#[bench]
fn select_1d(bench: &mut Bencher) {
    let a = Array::<f32, _>::zeros(1024);
    let mut selectable = (0..a.len()).step_by(17).collect::<Vec<_>>();
    selectable.extend(selectable.clone().iter().rev());

    bench.iter(|| {
        a.select(Axis(0), &selectable)
    });
}
38 changes: 38 additions & 0 deletions src/data_repr.rs
@@ -6,6 +6,8 @@ use alloc::borrow::ToOwned;
use alloc::vec::Vec;
use crate::extension::nonnull;

use rawpointer::PointerExt;

/// Array's representation.
///
/// *Don’t use this type directly—use the type alias
@@ -55,6 +57,37 @@ impl<A> OwnedRepr<A> {
        self.ptr
    }

    /// Return end pointer
    pub(crate) fn as_end_nonnull(&self) -> NonNull<A> {
[Review comment, member] The name end_nonnull sounds a little better to me.

[Reply, bluss (member, author), Apr 13, 2021] I'll leave this for now, I forgot about it, that's mostly why. And that the convention is weirdly strong with preferring "as_" for pointer accessors, so that's what a couple of our methods have. I don't even like .as_ptr() - it should normally be just .ptr() etc.

        unsafe {
            self.ptr.add(self.len)
        }
    }

    /// Reserve `additional` elements; return the new pointer
    ///
    /// ## Safety
    ///
    /// Note that existing pointers into the data are invalidated
    #[must_use = "must use new pointer to update existing pointers"]
    pub(crate) fn reserve(&mut self, additional: usize) -> NonNull<A> {
        self.modify_as_vec(|mut v| {
            v.reserve(additional);
            v
        });
        self.as_nonnull_mut()
    }

    /// Set the valid length of the data
    ///
    /// ## Safety
    ///
    /// The first `new_len` elements of the data should be valid.
    pub(crate) unsafe fn set_len(&mut self, new_len: usize) {
        debug_assert!(new_len <= self.capacity);
        self.len = new_len;
    }

    /// Cast self into equivalent repr of other element type
    ///
    /// ## Safety
@@ -72,6 +105,11 @@ impl<A> OwnedRepr<A> {
        }
    }

    fn modify_as_vec(&mut self, f: impl FnOnce(Vec<A>) -> Vec<A>) {
        let v = self.take_as_vec();
        *self = Self::from(f(v));
    }

    fn take_as_vec(&mut self) -> Vec<A> {
        let capacity = self.capacity;
        let len = self.len;
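The reserve/set_len pair above is documented as a two-step contract: reserve may reallocate and invalidates existing pointers, and set_len is only valid once the new elements are initialized. A hypothetical sketch of that pattern, written against Vec<T> since OwnedRepr is crate-internal; it illustrates the contract and is not code from the PR:

// Hypothetical illustration of the reserve/initialize/set_len pattern.
fn append_slice<T: Clone>(storage: &mut Vec<T>, extra: &[T]) {
    storage.reserve(extra.len()); // may reallocate; any saved pointers are now stale
    let old_len = storage.len();
    unsafe {
        let mut end = storage.as_mut_ptr().add(old_len); // fresh end pointer
        for x in extra {
            end.write(x.clone()); // initialize each new slot in place
            end = end.add(1);
        }
        // All old_len + extra.len() elements are initialized now.
        storage.set_len(old_len + extra.len());
    }
}

fn main() {
    let mut v = vec![1, 2, 3];
    append_slice(&mut v, &[4, 5]);
    assert_eq!(v, [1, 2, 3, 4, 5]);
}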
39 changes: 29 additions & 10 deletions src/impl_methods.rs
@@ -873,20 +873,39 @@ where
     /// ```
     pub fn select(&self, axis: Axis, indices: &[Ix]) -> Array<A, D>
     where
-        A: Copy,
+        A: Clone,

[Review comment, member] Nice! :)

         S: Data,
         D: RemoveAxis,
     {
-        let mut subs = vec![self.view(); indices.len()];
-        for (&i, sub) in zip(indices, &mut subs[..]) {
-            sub.collapse_axis(axis, i);
-        }
-        if subs.is_empty() {
-            let mut dim = self.raw_dim();
-            dim.set_axis(axis, 0);
-            unsafe { Array::from_shape_vec_unchecked(dim, vec![]) }
+        if self.ndim() == 1 {
+            // using .len_of(axis) means that we check if `axis` is in bounds too.
+            let axis_len = self.len_of(axis);
+            // bounds check the indices first
+            if let Some(max_index) = indices.iter().cloned().max() {
+                if max_index >= axis_len {
+                    panic!("ndarray: index {} is out of bounds in array of len {}",
+                           max_index, self.len_of(axis));
+                }
+            } // else: indices empty is ok
+            let view = self.view().into_dimensionality::<Ix1>().unwrap();
+            Array::from_iter(indices.iter().map(move |&index| {
+                // Safety: bounds checked indexes
+                unsafe {
+                    view.uget(index).clone()
+                }
+            })).into_dimensionality::<D>().unwrap()
         } else {
-            concatenate(axis, &subs).unwrap()
+            let mut subs = vec![self.view(); indices.len()];
+            for (&i, sub) in zip(indices, &mut subs[..]) {
+                sub.collapse_axis(axis, i);
+            }
+            if subs.is_empty() {
+                let mut dim = self.raw_dim();
+                dim.set_axis(axis, 0);
+                unsafe { Array::from_shape_vec_unchecked(dim, vec![]) }
+            } else {
+                concatenate(axis, &subs).unwrap()
+            }
         }
     }

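For context on the relaxation from A: Copy to A: Clone in select(): it lets selection work with non-Copy element types. A small hedged example, assuming the public signature shown in the diff stays as-is:

use ndarray::{array, Axis};

fn main() {
    // String is Clone but not Copy, so this only compiles with the relaxed bound.
    let names = array!["alpha".to_string(), "beta".to_string(), "gamma".to_string()];
    let picked = names.select(Axis(0), &[2, 0, 2]);
    assert_eq!(picked, array!["gamma".to_string(), "alpha".to_string(), "gamma".to_string()]);
}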